
This content will become publicly available on March 11, 2026

Title: Deskilling and upskilling with AI systems
Introduction. Deskilling is a long-standing prediction of the use of information technology, raised anew by the increased capabilities of artificial intelligence (AI) systems. A review of studies of AI applications suggests that deskilling (or levelling of ability) is a common outcome, but systems can also require new skills, i.e., upskilling. Method. To identify which settings are more likely to yield deskilling vs. upskilling, we propose a model of a human interacting with an AI system for a task. The model highlights the possibility that a worker develops and exhibits (or does not) skills in prompting for, evaluating, and editing system output, thus yielding upskilling or deskilling. Findings. We illustrate these model-predicted effects on work with examples from current studies of AI-based systems. Conclusions. We discuss organizational implications of systems that deskill or upskill workers and suggest future research directions.
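As a rough, hypothetical illustration of the interaction loop the abstract describes (a worker prompting an AI system, then evaluating and possibly editing its output), a minimal Python sketch follows. The class names, skill variables, and weightings are illustrative assumptions, not the paper's model; the point is only where the evaluate/edit steps can be exercised or skipped.

```python
# Hypothetical sketch of a human-AI task loop: a worker prompts a system,
# evaluates its output, and optionally edits it. Skipping the evaluate/edit
# steps is the deskilling path; exercising them is the upskilling path.
# All names and weights here are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Worker:
    prompting_skill: float   # 0..1, skill at eliciting good output
    evaluation_skill: float  # 0..1, skill at judging output quality
    editing_skill: float     # 0..1, skill at repairing output

def complete_task(worker: Worker, system_quality: float) -> float:
    """Return task quality for one human-AI interaction (toy model)."""
    # Better prompts raise the quality of the raw system output.
    draft = system_quality * (0.5 + 0.5 * worker.prompting_skill)
    # A skilled evaluator catches flaws; an unskilled one accepts the draft.
    if worker.evaluation_skill > draft:
        # Editing recovers some of the gap between draft and ideal output.
        return draft + (1.0 - draft) * worker.editing_skill
    return draft  # output accepted as-is: no skill exercised or grown

# A worker who never evaluates or edits gets whatever the system produces.
print(complete_task(Worker(0.9, 0.8, 0.6), system_quality=0.7))
print(complete_task(Worker(0.2, 0.1, 0.0), system_quality=0.7))
```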
Award ID(s):
2129047
PAR ID:
10592287
Author(s) / Creator(s):
Publisher / Repository:
National Library of Sweden
Date Published:
Journal Name:
Information Research: an international electronic journal
Volume:
30
Issue:
iConf
ISSN:
1368-1613
Page Range / eLocation ID:
1009 to 1023
Subject(s) / Keyword(s):
Generative AI, Deskilling, Upskilling
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Rodrigo, M.M.; Matsuda, N.; Dimitrova, A.I. (Eds.)
    This article presents the background and vision of the Skills-based Talent Ecosystem for Upskilling (STEP UP) project. STEP UP is a collaboration among teams participating in the US National Science Foundation (NSF) Convergence Accelerator program, which supports translational use-inspired research. This article details the context for this work, describes the individual projects and the roles of AI in these projects, and explains how these projects are working synergistically towards the ambitious goals of increasing equity and efficiency in the US talent pipeline through skills-based training. The technologies that support this vision range in maturity from laboratory technologies to field-tested prototypes to production software and include applications of Natural Language Understanding and Machine Learning that have only become feasible over the past two to three years. 
  2. We describe a human-centered and design-based stance towards generating explanations in AI agents. We collect questions about the working of an AI agent through participatory design by focus groups. We capture an agent's design through a Task-Method-Knowledge (TMK) model that explicitly specifies the agent's tasks and goals, as well as the mechanisms, knowledge, and vocabulary it uses for accomplishing the tasks. We illustrate our approach through the generation of explanations in Skillsync, an AI agent that links companies and colleges for worker upskilling and reskilling. In particular, we embed a question-answering agent called AskJill in Skillsync, where AskJill contains a TMK model of Skillsync's design. AskJill presently answers human-generated questions about Skillsync's tasks and vocabulary, and thereby helps explain how Skillsync produces its recommendations.
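A minimal sketch of what answering design questions from a TMK model could look like, loosely in the spirit of AskJill's role in Skillsync as summarized above; the data structure, field names, and matching rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical TMK-style model: tasks (with goals and methods) plus the
# agent's vocabulary, queried to explain the agent's design. Illustrative
# only; not the actual AskJill/Skillsync code.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    goal: str
    methods: list[str] = field(default_factory=list)  # mechanisms used

tmk = {
    "recommend_training": Task(
        name="recommend_training",
        goal="match a company's skill gap to college course offerings",
        methods=["extract required skills", "rank matching courses"],
    ),
}
vocabulary = {"skill gap": "difference between required and current worker skills"}

def ask(question: str) -> str:
    """Answer 'what is X' and 'how does the agent X' questions from the model."""
    q = question.lower()
    for term, definition in vocabulary.items():
        if term in q:
            return f"'{term}' means: {definition}"
    for task in tmk.values():
        if task.name.replace("_", " ") in q:
            return f"Goal: {task.goal}. Methods: {'; '.join(task.methods)}"
    return "That question is outside the model of the agent's design."

print(ask("How does the agent recommend training?"))
print(ask("What is a skill gap?"))
```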
  3. Nudging is a behavioral strategy aimed at influencing people's thoughts and actions. Nudging techniques can be found in many situations in our daily lives, and they can be targeted at fast, unconscious human thinking (e.g., by using images to generate fear) or at more careful and effortful slow thinking (e.g., by releasing information that makes us reflect on our choices). In this paper, we propose and discuss a value-based AI-human collaborative framework in which AI systems nudge humans by proposing decision recommendations. Three different nudging modalities, based on when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities; examples of such values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
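One way the "values as parameters" idea could be instantiated is a selection rule that maps value priorities to one of the three nudging modalities. The sketch below is a hypothetical reading of the abstract; the value names, weights, and decision rule are assumptions, not the authors' framework.

```python
# Hypothetical value-parameterized choice among the three nudging modalities
# (fast thinking, slow thinking, meta-cognition). Illustrative assumptions
# throughout; not the paper's actual selection mechanism.

def choose_modality(values: dict[str, float]) -> str:
    """Pick when/how to present the AI recommendation, given value priorities.

    `values` maps value names (e.g., 'speed', 'upskilling', 'human_agency')
    to priorities in [0, 1]; per the abstract, these priorities can vary
    over time for a given decision environment.
    """
    speed = values.get("speed", 0.0)
    learning = values.get("upskilling", 0.0)
    agency = values.get("human_agency", 0.0)

    if speed >= max(learning, agency):
        # Show the recommendation immediately to support fast thinking.
        return "fast: present the recommendation up front"
    if learning >= agency:
        # Withhold the recommendation until the human commits to a choice,
        # prompting reflection (slow thinking) and skill growth.
        return "slow: reveal the recommendation after the human decides"
    # Support meta-cognition: let the human decide whether to see it at all.
    return "meta: offer the recommendation on request"

print(choose_modality({"speed": 0.2, "upskilling": 0.8, "human_agency": 0.5}))
```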
  4. In recent years, Artificial Intelligence (AI) systems have achieved revolutionary capabilities, providing intelligent solutions that surpass human skills in many cases. However, such capabilities come with power-hungry computation workloads. Therefore, hardware acceleration becomes as fundamental as the software design for improving the energy efficiency, silicon area, and latency of AI systems. Thus, innovative hardware platforms, architectures, and compiler-level approaches have been used to accelerate AI workloads. Crucially, innovative AI acceleration platforms are being adopted in application domains in which dependability is paramount, such as autonomous driving, healthcare, banking, space exploration, and Industry 4.0. Unfortunately, the complexity of both AI software and hardware makes dependability evaluation and improvement extremely challenging. Studies have been conducted on both the security and reliability of AI systems, such as vulnerability assessments, countermeasures to random faults, and analyses of side-channel attacks. This paper describes and discusses various reliability and security threats in AI systems, and presents representative case studies along with corresponding efficient countermeasures.
  5. The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI), designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation. From these we identify themes, which we use to frame a scoping literature review of current XAI features. The stakeholder engagement process lasted over nine months, covering three stakeholder groups' workflows, determining where AI could intervene, and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making could be beneficial depending on the complexity of the stakeholder's task. Additionally, for easy use cases, expert stakeholders such as surgeons prefer only the AI prediction and uncertainty estimates, with minimal to no other XAI features. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary to build the user's mental model of the system appropriately. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.