Title: Benchmarked Ethics: A Roadmap to AI Alignment, Moral Knowledge, and Control
Today’s artificial intelligence (AI) systems rely heavily on Artificial Neural Networks (ANNs), yet their black-box nature creates risk of catastrophic failure and harm. To promote verifiably safe AI, my research will determine constraints on incentives from a game-theoretic perspective, tie those constraints to moral knowledge represented in a knowledge graph, and reveal how neural models meet those constraints using novel interpretability methods. Specifically, I will develop techniques for describing models’ decision-making processes by predicting and isolating their goals, especially in relation to values derived from knowledge graphs. My research will allow critical AI systems to be audited in service of effective regulation.
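The idea of tying constraints to values represented in a knowledge graph can be illustrated with a minimal sketch. All names here (the triples, the `audit_goal` helper) are hypothetical illustrations, not the abstract's actual method:

```python
# Illustrative sketch (hypothetical names and triples): encoding values as
# knowledge-graph triples and auditing a model's inferred goal against them.

# A tiny knowledge graph of (subject, relation, object) triples.
VALUE_GRAPH = {
    ("maximize_engagement", "conflicts_with", "user_wellbeing"),
    ("preserve_privacy", "supports", "user_autonomy"),
    ("deceive_user", "violates", "honesty"),
}

def audit_goal(goal: str) -> list[str]:
    """Return the values a candidate goal conflicts with or violates."""
    return sorted(
        obj
        for subj, rel, obj in VALUE_GRAPH
        if subj == goal and rel in {"conflicts_with", "violates"}
    )

print(audit_goal("deceive_user"))      # ['honesty']
print(audit_goal("preserve_privacy"))  # []
```

In a real system the graph would be far larger and the goal would be inferred from the model by interpretability methods rather than supplied as a string; the sketch only shows how graph lookups can turn values into checkable constraints.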
Award ID(s): 2147305
PAR ID: 10493530
Author(s) / Creator(s):
Publisher / Repository: ACM
Date Published:
Journal Name: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
ISBN: 9798400702310
Page Range / eLocation ID: 964 to 965
Format(s): Medium: X
Location: Montreal QC Canada
Sponsoring Org: National Science Foundation
More Like this
  1. Artificial Intelligence (AI) systems for mental healthcare (MHCare) have grown steadily as the importance of early intervention for patients with chronic mental health (MH) conditions has become clear. Social media (SocMedia) emerged as the go-to platform for supporting patients seeking MHCare. The creation of peer-support groups free of social stigma has led patients to shift from clinical settings to SocMedia-supported interactions for quick help. Researchers began exploring SocMedia content for cues that reveal correlation or causation between different MH conditions, in order to design better interventional strategies. User-level, classification-based AI systems were designed to leverage diverse SocMedia data across MH conditions to predict those conditions. Subsequently, researchers created classification schemes to measure the severity of each MH condition. Such ad hoc schemes, engineered features, and models not only require large amounts of data but also fail to support clinically acceptable and explainable reasoning over their outcomes. Improving Neural-AI for MHCare requires infusing the clinical symbolic knowledge that clinicians use in decision making. An impactful use case of Neural-AI systems in MH is conversational systems, which require coordination between classification and generation to facilitate humanistic conversation in conversational agents (CAs). Current CAs built on deep language models lack factual correctness, medical relevance, and safety in their generations, problems compounded by unexplainable statistical classification techniques. This lecture-style tutorial will demonstrate our investigations into neuro-symbolic methods of infusing clinical knowledge to improve the outcomes of Neural-AI systems for MHCare interventions: (a) We will discuss the use of diverse clinical knowledge in creating specialized datasets to train Neural-AI systems effectively. (b) Patients with cardiovascular disease express MH symptoms differently depending on gender; we will show that knowledge-infused Neural-AI systems can identify gender-specific MH symptoms in such patients. (c) We will describe strategies for infusing clinical process knowledge as heuristics and constraints to improve language models in generating relevant questions and responses.
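The constraint-infusion idea can be sketched very roughly. The lexicon and candidate responses below are hypothetical stand-ins, not the tutorial's actual method; the sketch only shows clinical process knowledge acting as a hard filter on a generator's candidates:

```python
# Illustrative sketch (hypothetical lexicon and candidates): using clinical
# process knowledge as a hard constraint to filter a language model's
# candidate responses before they reach a patient.

UNSAFE_TERMS = {"diagnose", "prescribe"}  # clinical actions a CA must not take

def admissible(response: str) -> bool:
    """A response is admissible if it takes no clinical action itself."""
    words = set(response.lower().split())
    return not (words & UNSAFE_TERMS)

candidates = [
    "I can prescribe you something for that.",
    "How long have you been feeling this way?",
]
safe = [c for c in candidates if admissible(c)]  # keeps only the question
```

Real knowledge-infused systems integrate such constraints into training or decoding rather than post-hoc filtering, but the filter illustrates the same principle: symbolic clinical knowledge bounds what the neural model may say.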
  2. Collective action by gig knowledge workers is a potent method for improving labor conditions on platforms like Upwork, Amazon Mechanical Turk, and Toloka. However, this type of collective action is still rare today. Existing systems for supporting collective action are inadequate: they do not help workers identify and understand their different workplace problems, plan effective solutions, and put those solutions into action. This talk will discuss how my research lab is creating worker-centric, AI-enhanced technologies that enable collective action among gig knowledge workers. Building solid AI-enhanced technologies for gig worker collective action will pave the way for a fair and ethical gig economy, one with fair wages, humane working conditions, and increased job security. Our approach first integrates "sousveillance," a concept coined by Steve Mann, into the technologies. Sousveillance involves individuals or groups using surveillance tools to monitor and record those in positions of power. Here, the technologies enable gig workers to monitor their workplace and their algorithmic bosses, giving them access to their own workplace data for the first time. This facilitates the first stage of collective action: problem identification. I will then discuss how we combine this data with Large Language Models (LLMs) and social theories to create intelligent assistants that guide workers through collective action via sensemaking and solution implementation. The talk will present a set of case studies showcasing this vision of designing data-driven AI technologies to power gig worker collective action.
In particular, I will present three systems: 1) GigSousveillance, which allows workers to monitor and collect their own job-related data, facilitating quantification of workplace problems; 2) GigSense, which equips workers with an AI assistant that facilitates sensemaking of their work problems, helping them strategically devise solutions to their challenges; and 3) GigAction, an AI assistant that guides workers in implementing their proposed solutions. I will discuss how we design and implement these systems through a participatory design approach with workers, alongside experiments and longitudinal deployments in the real world. I conclude by presenting a research agenda for transforming and rethinking the role of AI in our workplaces, and for researching effective socio-technical solutions in favor of a worker-centric future that counters technoauthoritarianism.
  3. This report discusses implementing artificial intelligence (AI) in healthcare. AI would benefit healthcare because of the many opportunities it provides: it can help detect and cure diseases, guide patients along a path to treatment, and even assist doctors with surgeries. This paper discusses the benefits of AI in healthcare and how it can be implemented with attention to cybersecurity. In addition, I will conduct interviews with doctors and nurses to hear their perspectives on AI in hospitals and why it is needed, and I will survey nursing students at my university about their viewpoints on adding AI to the field of medicine. The best way to incorporate both user input and research into this paper is to use user input to support the research; user input is a valuable addition because it gives readers a real-world opinion on whether this topic is valid.
  4. Abstract Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy; accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium for achieving explainability in AI-based systems. We also discuss patterns of recent development in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
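The core idea of provenance for explainability can be sketched concretely. This is a minimal, library-free illustration with hypothetical names (`predict_with_provenance`, the field layout), not the paper's proposed scheme; standards such as W3C PROV define much richer models:

```python
# Illustrative sketch (hypothetical record layout): attaching a simple
# provenance record to a model prediction so the outcome can later be
# traced back to the model and input that produced it.
import hashlib
import json

def predict_with_provenance(model_id: str, model_version: str,
                            features: dict, prediction) -> dict:
    """Bundle a prediction with the metadata needed to audit it later."""
    input_digest = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()
    return {
        "prediction": prediction,
        "provenance": {
            "model_id": model_id,            # which model produced this output
            "model_version": model_version,  # exact version, for reproducibility
            "input_sha256": input_digest,    # fingerprint of the input record
        },
    }

record = predict_with_provenance("risk-scorer", "1.4.2",
                                 {"age": 52, "bmi": 27.1}, prediction=0.83)
```

Even this minimal record lets an auditor answer "which model, which version, which input?" for any stored outcome, which is the transparency role the review assigns to provenance.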
  5. This paper presents a conversational pipeline for crafting domain knowledge for complex neuro-symbolic models through natural language prompts. It leverages large language models to generate declarative programs in the DomiKnowS framework. Programs in this framework express concepts and their relationships as a graph, along with logical constraints between them; the graph can later be connected to trainable neural models according to those specifications. Our proposed pipeline uses techniques such as dynamic in-context demonstration retrieval, model refinement based on feedback from a symbolic parser, visualization, and user interaction to generate the task's structure and formal knowledge representation. This approach empowers domain experts, even those not well versed in ML/AI, to formally declare their knowledge for incorporation into customized neural models in the DomiKnowS framework.
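The generate-validate-refine loop described above can be sketched with mocks. The LLM call, the toy declaration syntax, and the parser below are all hypothetical stand-ins (DomiKnowS's real declaration language and API differ); only the loop structure mirrors the pipeline:

```python
# Illustrative sketch (mocked LLM and toy syntax): generate a declarative
# concept-graph program, validate it with a symbolic parser, and feed parse
# errors back into the prompt until the program parses.
import re

def mock_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: emits a declarative program, and only
    # fixes its syntax after seeing parser feedback in the prompt.
    if "error" in prompt:
        return "concept(email); concept(spam); is_a(spam, email);"
    return "concept(email); concept(spam); is_a(spam email);"

def parse_declarations(program: str):
    """Toy symbolic parser: every statement must look like name(arg, ...)."""
    for stmt in filter(None, (s.strip() for s in program.split(";"))):
        if not re.fullmatch(r"\w+\(\s*\w+(\s*,\s*\w+)*\s*\)", stmt):
            return f"error: malformed statement '{stmt}'"
    return None  # program parses cleanly

def generate_knowledge(task: str, max_rounds: int = 3) -> str:
    """Refinement loop: parser errors are fed back as part of the prompt."""
    prompt = task
    for _ in range(max_rounds):
        program = mock_llm(prompt)
        feedback = parse_declarations(program)
        if feedback is None:
            return program
        prompt = f"{task}\n{feedback}"  # refine with symbolic feedback
    raise RuntimeError("no parseable program within the round budget")

program = generate_knowledge("Declare concepts for spam classification.")
```

The real pipeline adds in-context demonstration retrieval and user interaction on top of this loop, but the parser-in-the-loop structure is the part that keeps LLM output formally well-formed.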