Title: Implementing Artificial Intelligence in Healthcare
This report will discuss implementing artificial intelligence in healthcare. Artificial intelligence would be beneficial to healthcare because of the many opportunities it provides: AI can be used to help detect and cure diseases, guide patients along a path to treatment, and even assist doctors with surgeries. Within this paper I will discuss the benefits of AI in healthcare and how it can be implemented securely using cybersecurity. In addition, I will conduct interviews with doctors and nurses to hear their perspectives on AI in hospitals and why it is needed, and I will create a survey for nursing students at my university to learn their viewpoints on adding AI to the field of medicine. The best method to incorporate both user input and research into this paper is to use user input to back up the research. User input will be a great addition because it gives readers a real-world opinion on whether this topic is valid.
Award ID(s):
1754054
PAR ID:
10528923
Author(s) / Creator(s):
Publisher / Repository:
The 2024 ADMI Symposium.
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This report will discuss the importance of network security. Network security is important because it prevents hackers from gaining access to data and personal information. The issue in society is that users have their data stolen every day and fear that their information will be exposed to the world. Within this paper I will discuss the importance of network security and how it can change your day-to-day life through cybersecurity. In addition, I will create a survey for computer science majors to see whether they consider network security important. I will also send a survey to a DISA employee to get his perspective and comments on this topic. The best method to incorporate both user input and research into this paper is to use user input to back up the research. User input will be a great addition because it gives readers a real-world opinion on whether this topic is valid.
  2. Artificial intelligence, intelligence demonstrated by machines, has emerged as one of the most convenient and personable technologies in everyday life. Specifically, AI powers digital personal assistants that answer user questions and automate everyday tasks. AI assistants listen continuously so they can respond to the user, even when not in use. Why is this a problem? For a hacker, this makes any digital assistant a potential listening device, a major security and privacy issue. While some companies are handling this situation well, others are falling behind as their AI components slowly fade from the consumer market. Which digital assistant is best and most secure, you may ask? This paper will first detail how each AI assistant works from a technical perspective. Then, based on survey results, it will detail how AI assistants rank in terms of overall security and performance.
  3. Abstract: As artificial intelligence (AI) methods are increasingly used to develop new guidance intended for operational use by forecasters, it is critical to evaluate whether forecasters deem the guidance trustworthy. Past trust-related AI research suggests that certain attributes (e.g., understanding how the AI was trained, interactivity, and performance) contribute to users perceiving the AI as trustworthy. However, little research has been done to examine the role of these and other attributes for weather forecasters. In this study, we conducted 16 online interviews with National Weather Service (NWS) forecasters to examine (i) how they make guidance use decisions and (ii) how the AI model technique used, training, input variables, performance, and developers as well as interacting with the model output influenced their assessments of trustworthiness of new guidance. The interviews pertained to either a random forest model predicting the probability of severe hail or a 2D convolutional neural network model predicting the probability of storm mode. When taken as a whole, our findings illustrate how forecasters' assessment of AI guidance trustworthiness is a process that occurs over time rather than automatically or at first introduction. We recommend developers center end users when creating new AI guidance tools, making end users integral to their thinking and efforts. This approach is essential for the development of useful and used tools. The details of these findings can help AI developers understand how forecasters perceive AI guidance and inform AI development and refinement efforts. Significance Statement: We used a mixed-methods quantitative and qualitative approach to understand how National Weather Service (NWS) forecasters 1) make guidance use decisions within their operational forecasting process and 2) assess the trustworthiness of prototype guidance developed using artificial intelligence (AI). When taken as a whole, our findings illustrate that forecasters' assessment of AI guidance trustworthiness is a process that occurs over time rather than automatically and suggest that developers must center the end user when creating new AI guidance tools to ensure that the developed tools are useful and used.
  4. We propose and test a novel graph learning-based explainable artificial intelligence (XAI) approach to address the challenge of developing explainable predictions of patient length of stay (LoS) in intensive care units (ICUs). Specifically, we address a notable gap in the literature on XAI methods that identify interactions between model input features to predict patient health outcomes. Our model intrinsically constructs a patient-level graph, which identifies the importance of feature interactions for prediction of health outcomes. It demonstrates state-of-the-art explanation capabilities based on identification of salient feature interactions compared with traditional XAI methods for prediction of LoS. We supplement our XAI approach with a small-scale user study, which demonstrates that our model can lead to greater user acceptance of artificial intelligence (AI) model-based decisions by contributing to greater interpretability of model predictions. Our model lays the foundation to develop interpretable, predictive tools that healthcare professionals can utilize to improve ICU resource allocation decisions and enhance the clinical relevance of AI systems in providing effective patient care. Although our primary research setting is the ICU, our graph learning model can be generalized to other healthcare contexts to accurately identify key feature interactions for prediction of other health outcomes, such as mortality, readmission risk, and hospitalizations. 
  5. Today’s artificial intelligence (AI) systems rely heavily on Artificial Neural Networks (ANNs), yet their black box nature induces risk of catastrophic failure and harm. In order to promote verifiably safe AI, my research will determine constraints on incentives from a game-theoretic perspective, tie those constraints to moral knowledge as represented by a knowledge graph, and reveal how neural models meet those constraints with novel interpretability methods. Specifically, I will develop techniques for describing models’ decision-making processes by predicting and isolating their goals, especially in relation to values derived from knowledge graphs. My research will allow critical AI systems to be audited in service of effective regulation. 
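Item 3 above mentions a random forest model that outputs a probability of severe hail as forecaster guidance. The following is a minimal, self-contained sketch of that general idea only; it is not the model studied in that work, and the predictor names, synthetic data, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: a random forest that emits a probability of severe hail
# from a few hypothetical storm-environment predictors (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: CAPE (J/kg), 0-6 km shear (m/s), freezing-level height (m)
X = np.column_stack([
    rng.gamma(2.0, 800.0, n),      # CAPE
    rng.normal(15.0, 6.0, n),      # deep-layer shear
    rng.normal(3800.0, 500.0, n),  # freezing-level height
])
# Synthetic label: severe hail more likely with high CAPE and strong shear
logit = 0.0008 * X[:, 0] + 0.08 * X[:, 1] - 0.0005 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(logit - 1.0)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Probabilistic guidance of the kind a forecaster would be shown
probs = model.predict_proba(X_te)[:, 1]
print(f"Mean predicted severe-hail probability: {probs.mean():.3f}")
```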
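Item 4 above describes a model that scores the importance of feature interactions for predicting ICU length of stay. The sketch below is not that graph learning model; it illustrates the underlying idea with a much simpler stand-in, scoring pairwise interactions by joint permutation importance on synthetic data (the feature names, data, and the choice of a gradient boosting regressor are all assumptions) and reading the scores as edge weights of a feature-interaction graph.

```python
# Toy sketch: pairwise feature-interaction scores for a length-of-stay-style
# regression, computed by joint permutation importance and interpreted as
# edge weights of a feature-interaction graph. Synthetic, illustrative data.
import numpy as np
from itertools import combinations
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 1500
features = ["age", "heart_rate", "creatinine", "lactate"]
X = rng.normal(size=(n, len(features)))
# Synthetic length of stay with an interaction between creatinine and lactate
y = 3 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + 1.2 * X[:, 2] * X[:, 3] + rng.normal(0, 0.3, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
base_err = mean_squared_error(y, model.predict(X))

def permuted_error(cols):
    """Error after jointly shuffling the given feature columns."""
    Xp = X.copy()
    for c in cols:
        Xp[:, c] = rng.permutation(Xp[:, c])
    return mean_squared_error(y, model.predict(Xp))

# Single-feature importances (error increase when one feature is shuffled)
single = {i: permuted_error([i]) - base_err for i in range(len(features))}

# Interaction score: extra error from breaking both features beyond breaking each alone
edges = {}
for i, j in combinations(range(len(features)), 2):
    pair = permuted_error([i, j]) - base_err
    edges[(features[i], features[j])] = pair - single[i] - single[j]

for (a, b), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: interaction score {w:+.3f}")
```

Unlike this cohort-level heuristic, the model described in item 4 constructs the graph per patient and learns it end to end; the sketch only shows how pairwise interaction scores can be read as a graph over clinical features.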