This content will become publicly available on March 11, 2026

Title: Deskilling and upskilling with AI systems
Introduction. Deskilling is a long-standing prediction of the use of information technology, raised anew by the increased capabilities of artificial intelligence (AI) systems. A review of studies of AI applications suggests that deskilling (or a levelling of ability) is a common outcome, but systems can also require new skills, i.e., upskilling. Method. To identify which settings are more likely to yield deskilling vs. upskilling, we propose a model of a human interacting with an AI system for a task. The model highlights the possibility for a worker to develop and exhibit (or not) skills in prompting for, and in evaluating and editing, system output, thus yielding upskilling or deskilling. Findings. We illustrate these model-predicted effects on work with examples from current studies of AI-based systems. Conclusions. We discuss organizational implications of systems that deskill or upskill workers and suggest future research directions.
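The three worker skills the abstract's model highlights (prompting, evaluating output, and editing output) can be read as stages in a single interaction loop. A minimal sketch of that loop follows; the class, function, and method names (`Interaction`, `complete_task`, `write_prompt`, `evaluate`, `edit`) are illustrative assumptions for this sketch, not constructs from the paper.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One round of a worker using an AI system for a task."""
    prompt: str          # skill 1: framing the request
    raw_output: str      # the system's response
    accepted: bool       # skill 2: evaluating the output
    final_output: str    # skill 3: editing before use

def complete_task(task: str, model, worker) -> Interaction:
    # Each worker call below is a point where the worker either
    # exercises a skill (upskilling) or defers entirely to the
    # system (deskilling).
    prompt = worker.write_prompt(task)            # prompting skill
    raw = model.generate(prompt)                  # delegated to the AI
    ok = worker.evaluate(task, raw)               # evaluation skill
    final = raw if ok else worker.edit(raw)       # editing skill
    return Interaction(prompt, raw, ok, final)
```

Under this framing, a deskilled workflow is one where `write_prompt` is boilerplate, `evaluate` always returns True, and `edit` is never invoked.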
Award ID(s):
2129047
PAR ID:
10592287
Author(s) / Creator(s):
Publisher / Repository:
National Library of Sweden
Date Published:
Journal Name:
Information Research an international electronic journal
Volume:
30
Issue:
iConf
ISSN:
1368-1613
Page Range / eLocation ID:
1009 to 1023
Subject(s) / Keyword(s):
Generative AI, Deskilling, Upskilling
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Rodrigo, M.M.; Matsuda, N.; Dimitrova, A.I. (Eds.)
    This article presents the background and vision of the Skills-based Talent Ecosystem for Upskilling (STEP UP) project. STEP UP is a collaboration among teams participating in the US National Science Foundation (NSF) Convergence Accelerator program, which supports translational use-inspired research. This article details the context for this work, describes the individual projects and the roles of AI in these projects, and explains how these projects are working synergistically towards the ambitious goals of increasing equity and efficiency in the US talent pipeline through skills-based training. The technologies that support this vision range in maturity from laboratory technologies to field-tested prototypes to production software and include applications of Natural Language Understanding and Machine Learning that have only become feasible over the past two to three years. 
  2. This research-to-practice paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI, its socio-technical implications, and its practical applications for all levels of education. With the rapid evolution of artificial intelligence (AI), there is a need for AI literacy that goes beyond the traditional AI education curriculum. AI literacy has been conceptualized in various ways, including public literacy, competency building for designers, conceptual understanding of AI concepts, and domain-specific upskilling. Most of these conceptualizations were established before the public release of Generative AI (Gen-AI) tools such as ChatGPT. AI education has focused on the principles and applications of AI through a technical lens that emphasizes the mastery of AI principles, the mathematical foundations underlying these technologies, and the programming and mathematical skills necessary to implement AI solutions. The non-technical component of AI literacy has often been limited to social and ethical implications, privacy and security issues, or the experience of interacting with AI. In AI Literacy for All, we emphasize a balanced curriculum that includes technical as well as non-technical learning outcomes to enable a conceptual understanding and critical evaluation of AI technologies in an interdisciplinary socio-technical context. The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI. While it is important to include all learning outcomes for AI education in a Computer Science major, the learning outcomes can be adjusted for other learning contexts, including non-CS majors, high school summer camps, the adult workforce, and the public.
This paper advocates for a shift in AI literacy education to offer a more interdisciplinary socio-technical approach as a pathway to broaden participation in AI. This approach not only broadens students' perspectives but also prepares them to think critically about integrating AI into their future professional and personal lives. 
  3. We describe a human-centered, design-based stance towards generating explanations in AI agents. We collect questions about the working of an AI agent through participatory design with focus groups. We capture an agent's design through a Task-Method-Knowledge (TMK) model that explicitly specifies the agent's tasks and goals, as well as the mechanisms, knowledge, and vocabulary it uses to accomplish those tasks. We illustrate our approach through the generation of explanations in Skillsync, an AI agent that links companies and colleges for worker upskilling and reskilling. In particular, we embed a question-answering agent called AskJill in Skillsync, where AskJill contains a TMK model of Skillsync's design. AskJill presently answers human-generated questions about Skillsync's tasks and vocabulary, and thereby helps explain how Skillsync produces its recommendations.
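To make the Task-Method-Knowledge idea concrete, a TMK model can be pictured as a small structure mapping tasks to goals, methods, and vocabulary, which a question-answering agent then looks up. The sketch below is a toy illustration under that assumption; the field names, the keyword-matching `answer` method, and the example content are all hypothetical and much simpler than AskJill's actual mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class TMK:
    """Minimal Task-Method-Knowledge model: tasks map to goals,
    to the method steps that accomplish them, and to the
    vocabulary the agent uses."""
    tasks: dict[str, str] = field(default_factory=dict)          # task -> goal
    methods: dict[str, list[str]] = field(default_factory=dict)  # task -> steps
    vocabulary: dict[str, str] = field(default_factory=dict)     # term -> definition

    def answer(self, question: str) -> str:
        # Toy keyword lookup standing in for QA over the model.
        q = question.lower()
        for term, definition in self.vocabulary.items():
            if term.lower() in q:
                return f"{term}: {definition}"
        for task, goal in self.tasks.items():
            if task.lower() in q:
                steps = "; ".join(self.methods.get(task, []))
                return f"{task} (goal: {goal}; method: {steps})"
        return "No answer found in the model."
```

The point of the structure is that explanations are grounded in the agent's declared design rather than generated post hoc.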
  4. Nudging is a behavioral strategy aimed at influencing people's thoughts and actions. Nudging techniques appear in many situations in our daily lives; they can target fast, unconscious human thinking, e.g., by using images to generate fear, or more careful and effortful slow thinking, e.g., by releasing information that makes us reflect on our choices. In this paper, we propose and discuss a value-based AI-human collaborative framework in which AI systems nudge humans by proposing decision recommendations. Three different nudging modalities, based on when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities. Examples of such values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
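The abstract's idea of values as instantiable parameters that select a nudging modality might be sketched as a priority-weighted lookup. Everything below (the modality descriptions, the value-to-modality mapping, and the highest-priority-wins rule) is an assumed illustration, not the framework's actual selection logic.

```python
from enum import Enum

class Modality(Enum):
    """Nudging modalities distinguished by when the recommendation
    is shown, per the fast/slow/meta-cognition framing."""
    FAST = "show recommendation before the human decides"
    SLOW = "show recommendation alongside reflective information"
    META = "withhold recommendation until the human self-assesses"

# Hypothetical mapping from a dominant value to a modality.
VALUE_TO_MODALITY = {
    "speed": Modality.FAST,
    "decision_quality": Modality.SLOW,
    "upskilling": Modality.META,
    "human_agency": Modality.META,
}

def choose_modality(value_priorities: dict[str, float]) -> Modality:
    """Pick the modality mapped to the highest-priority value.
    Priorities can vary over time, so this is re-evaluated per decision."""
    dominant = max(value_priorities, key=value_priorities.get)
    return VALUE_TO_MODALITY.get(dominant, Modality.SLOW)
```

For example, a scenario prioritizing worker upskilling over speed would select the meta-cognitive modality, deferring the recommendation until the human has formed a view.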
  5. In recent years, Artificial Intelligence (AI) systems have achieved revolutionary capabilities, providing intelligent solutions that surpass human skills in many cases. However, such capabilities come with power-hungry computational workloads. Hardware acceleration therefore becomes as fundamental as software design for improving the energy efficiency, silicon area, and latency of AI systems, and innovative hardware platforms, architectures, and compiler-level approaches have been used to accelerate AI workloads. Crucially, innovative AI acceleration platforms are being adopted in application domains in which dependability is paramount, such as autonomous driving, healthcare, banking, space exploration, and Industry 4.0. Unfortunately, the complexity of both AI software and hardware makes evaluating and improving dependability extremely challenging. Studies have been conducted on both the security and reliability of AI systems, such as vulnerability assessments, countermeasures to random faults, and analyses of side-channel attacks. This paper describes and discusses various reliability and security threats in AI systems and presents representative case studies along with corresponding efficient countermeasures.