Title: Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning
In this paper, we outline a new method for evaluating the human impact of machine-learning (ML) applications. In partnership with Underwriters Laboratories Inc., we have developed a framework to evaluate the impacts of a particular use of machine learning that is based on the goals and values of the domain in which that application is deployed. By examining the use of artificial intelligence (AI) in particular domains, such as journalism, criminal justice, or law, we can develop more nuanced and practically relevant understandings of key ethical guidelines for artificial intelligence. By decoupling the extraction of the facts of the matter from the evaluation of the impact of the resulting systems, we create a framework for the process of assessing impact that has two distinctly different phases.
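The two-phase decoupling described in the abstract can be sketched in code. This is a minimal illustration, not the authors' framework: the names (`Fact`, `DomainEvaluator`), the measures, and the thresholds are all assumptions chosen for the example.

```python
from dataclasses import dataclass

# Phase 1: record the facts of the matter, with no value judgment attached.
@dataclass(frozen=True)
class Fact:
    system: str
    measure: str
    value: float

# Phase 2: evaluate the recorded facts against the goals and values
# of the domain in which the system is deployed.
@dataclass
class DomainEvaluator:
    domain: str
    rules: dict  # measure name -> acceptability predicate

    def evaluate(self, facts):
        # Judge only the facts this domain has rules for.
        return {f.measure: self.rules[f.measure](f.value)
                for f in facts if f.measure in self.rules}

facts = [
    Fact("risk-model", "false_positive_rate", 0.12),
    Fact("risk-model", "coverage", 0.95),
]
journalism = DomainEvaluator(
    domain="journalism",
    rules={"false_positive_rate": lambda v: v < 0.05},
)
print(journalism.evaluate(facts))  # {'false_positive_rate': False}
```

The same immutable facts could be handed to a different `DomainEvaluator` (say, for criminal justice) with different rules, which is the point of separating extraction from evaluation.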
Award ID(s):
1917707
PAR ID:
10319637
Author(s) / Creator(s):
Date Published:
Journal Name:
AI & SOCIETY
ISSN:
0951-5666
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Artificial intelligence and recent advances in deep learning architectures, including transformer networks and large language models, change the way people think and act to solve problems. Software engineering, as an increasingly complex process to design, develop, test, deploy, and maintain large-scale software systems for solving real-world challenges, is profoundly affected by many revolutionary artificial intelligence tools in general and machine learning in particular. In this roadmap for artificial intelligence in software engineering, we highlight the recent deep impact of artificial intelligence on software engineering by discussing success stories of applying artificial intelligence to classic and new software development challenges. We identify the new challenges that the software engineering community has to address in the coming years to successfully apply artificial intelligence in software engineering, and we share our research roadmap toward the effective use of artificial intelligence in the software engineering profession, while still protecting fundamental human values. We spotlight three main areas that challenge research in software engineering: the use of generative artificial intelligence and large language models for engineering large software systems, the need for large and unbiased datasets and benchmarks for training and evaluating deep learning and large language models for software engineering, and the need for a new code of digital ethics for applying artificial intelligence in software engineering.
  2. Pranjol, Zahid (Ed.)
    This perspective article explores and advocates approaches for designing equitable learning experiences around students' use of artificial intelligence (AI), machine learning, and technology through the Universal Design for Learning (UDL) framework, using chemistry examples that can be applied to any course in STEM. The use of AI and machine learning is disrupting learning in higher education and is also casting a spotlight on systemic inequities, particularly those affecting minoritized groups broadly and in STEM fields. In particular, the emergence of AI has drawn attention to inequities faced by minoritized students in academic and professional ethics. As the U.S. education system grapples with a nuanced mix of acceptance of and hesitation toward AI, the necessity for inclusive and equitable education, impactful learning practices, and innovative strategies has become more pronounced. Promoting equitable approaches to the use of artificial intelligence and technology in STEM learning will be an important milestone in addressing STEM disparities affecting minoritized groups and ensuring equitable access to evolving technology.
  3. The ecosystem for automated offensive security tools has grown in recent years. As more tools automate offensive security techniques via Artificial Intelligence (AI) and Machine Learning (ML), they may introduce vulnerabilities to adversarial attacks. Therefore, it is imperative that research is conducted to help understand the techniques used by these security tools. Our work explores the current state of the art in offensive security tools. First, we employ an abstract model that can be used to understand what phases of an Offensive Cyber Operation (OCO) can be automated. We then adopt a generalizable taxonomy and apply it to automation tools (such as normal automation and the use of artificial intelligence in automation). We then curated a dataset of tools and research papers and quantitatively analyzed it. Our work resulted in a public dataset that includes analysis of (n=57) papers and OCO tools mapped to the MITRE ATT&CK Framework enterprise techniques, the applicable phases of our OCO model, and the details of the automation technique. The results show a need for a granular expansion of the ATT&CK Exploit Public-Facing Application technique. A critical finding is that most OCO tools employed simple rule-based automation, hinting at a lucrative research opportunity for the use of Artificial Intelligence (AI) and Machine Learning (ML) in future OCO tooling.
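A dataset like the one this abstract describes might record, per tool, its ATT&CK technique IDs, OCO phases, and automation category. The sketch below is illustrative only: the field names and tool entries are assumptions, not the authors' published schema (T1190 is the real ATT&CK ID for Exploit Public-Facing Application, which the abstract mentions).

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative record shape for a curated OCO tool dataset.
@dataclass
class OCOTool:
    name: str
    attack_techniques: list  # MITRE ATT&CK technique IDs, e.g. "T1190"
    oco_phases: list         # phases of the abstract OCO model
    automation: str          # e.g. "simple-rule-based", "ml-assisted"

# Hypothetical entries standing in for the curated (n=57) dataset.
tools = [
    OCOTool("scanner-x", ["T1190"], ["reconnaissance"], "simple-rule-based"),
    OCOTool("fuzz-ai", ["T1190"], ["exploitation"], "ml-assisted"),
]

# A quantitative slice: how many tools fall in each automation category.
counts = Counter(t.automation for t in tools)
print(dict(counts))  # {'simple-rule-based': 1, 'ml-assisted': 1}
```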
  4. Many organizations seek to ensure that machine learning (ML) and artificial intelligence (AI) systems work as intended in production but currently do not have a cohesive methodology in place to do so. To fill this gap, we propose MLTE (Machine Learning Test and Evaluation, colloquially referred to as "melt"), a framework and implementation to evaluate ML models and systems. The framework compiles state-of-the-art evaluation techniques into an organizational process for interdisciplinary teams, including model developers, software engineers, system owners, and other stakeholders. MLTE tooling supports this process by providing a domain-specific language that teams can use to express model requirements, an infrastructure to define, generate, and collect ML evaluation metrics, and the means to communicate results. 
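The abstract describes letting teams express model requirements and compare collected evaluation metrics against them. A minimal sketch of that pattern follows; the names (`Requirement`, `check_model`) and thresholds are illustrative, not the MLTE API.

```python
from dataclasses import dataclass

# A stated requirement on one evaluation metric.
@dataclass
class Requirement:
    metric: str
    threshold: float  # minimum acceptable value

def check_model(metrics: dict, requirements: list) -> dict:
    # Compare collected evaluation metrics against stated requirements;
    # a missing metric defaults to 0.0 and therefore fails.
    return {r.metric: metrics.get(r.metric, 0.0) >= r.threshold
            for r in requirements}

reqs = [Requirement("accuracy", 0.90), Requirement("recall", 0.85)]
collected = {"accuracy": 0.93, "recall": 0.80}
print(check_model(collected, reqs))  # {'accuracy': True, 'recall': False}
```

A report like this per-requirement pass/fail map is one simple way interdisciplinary teams could communicate evaluation results, which is the communication role the abstract assigns to the tooling.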
  5. In this tutorial, we present our recent work on building trusted, resilient, and interpretable AI models by combining symbolic methods developed for automated reasoning with connectionist learning methods that use deep neural networks. The increasing adoption of artificial intelligence and machine learning in systems, including safety-critical systems, has created a pressing need for scalable techniques that can establish trust in their safe behavior, resilience to adversarial attacks, and interpretability to enable human audits. This tutorial comprises three components: a review of techniques for verification of neural networks, methods for using geometric invariants to defend against adversarial attacks, and techniques for extracting logical symbolic rules by reverse-engineering machine learning models. These techniques form the core of TRINITY, the Trusted, Resilient, and Interpretable AI framework being developed at SRI. In this tutorial, we identify the key challenges in building the TRINITY framework and report recent results on each of these three fronts.
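One standard building block of the neural-network verification the tutorial reviews is interval bound propagation: pushing a box of possible inputs through a layer to get guaranteed output bounds. The sketch below shows this for a single linear layer; the weights and input bounds are illustrative values, and this is a generic technique, not the tutorial's specific method.

```python
import numpy as np

# One linear layer y = Wx + b with illustrative weights.
W = np.array([[1.0, -2.0],
              [0.5, 1.0]])
b = np.array([0.1, -0.1])

# Input box: each coordinate lies in [lo, hi].
lo = np.array([0.0, 0.0])
hi = np.array([1.0, 1.0])

# Represent the box by its center and per-coordinate radius.
center = (lo + hi) / 2
radius = (hi - lo) / 2

# Propagate: output center is W @ c + b; output radius is |W| @ r,
# since each output coordinate varies by at most sum_j |W_ij| * r_j.
out_center = W @ center + b
out_radius = np.abs(W) @ radius

out_lo = out_center - out_radius
out_hi = out_center + out_radius
print(out_lo, out_hi)  # guaranteed bounds on every output of the layer
```

Any input in the box is guaranteed to map inside `[out_lo, out_hi]`, so a safety property that holds on that output box holds for the whole input region.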