Title: Impacts of the Use of Machine Learning on Work Design
The increased pervasiveness of technological advancements in automation makes it urgent to address the question of how work is changing in response. Focusing on applications of machine learning (ML) to automate information tasks, we draw on a simple framework for identifying the impacts of an automated system on a task, one that suggests three patterns for the use of ML: decision support, blended decision making, and complete automation. In this paper, we extend this framework by considering how automation of one task might have implications for interdependent tasks and how automation applies to coordination mechanisms.
Award ID(s): 2026583, 1745463
NSF-PAR ID: 10301600
Author(s) / Creator(s):
Date Published:
Journal Name: 8th International Conference on Human-Agent Interaction
Page Range / eLocation ID: 163 to 170
Format(s): Medium: X
Sponsoring Org: National Science Foundation
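As a rough, hypothetical illustration of the three patterns named in the abstract above, the sketch below routes a single information task according to the chosen pattern and records whether the human or the model made the final call. The class names, confidence threshold, and routing rule are assumptions for exposition, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Pattern(Enum):
    DECISION_SUPPORT = "decision support"        # ML informs, the human decides
    BLENDED = "blended decision making"          # ML decides routine cases, the human handles the rest
    COMPLETE_AUTOMATION = "complete automation"  # ML decides every case

@dataclass
class TaskOutcome:
    decision: str
    decided_by: str  # "human" or "model"

def perform_task(pattern: Pattern,
                 model_predict: Callable[[dict], tuple],
                 human_decide: Callable[[dict, Optional[str]], str],
                 case: dict,
                 confidence_threshold: float = 0.9) -> TaskOutcome:
    """Route one information task according to the chosen automation pattern.

    model_predict returns (label, confidence); human_decide may see the
    model's suggestion. The threshold value is an illustrative assumption.
    """
    label, confidence = model_predict(case)

    if pattern is Pattern.COMPLETE_AUTOMATION:
        return TaskOutcome(decision=label, decided_by="model")

    if pattern is Pattern.BLENDED and confidence >= confidence_threshold:
        # Routine, high-confidence cases are resolved by the model;
        # the remainder fall through to the human below.
        return TaskOutcome(decision=label, decided_by="model")

    # Decision support (and low-confidence blended cases): the human
    # decides with the model's suggestion available.
    return TaskOutcome(decision=human_decide(case, label), decided_by="human")

# Example: a toy classifier and a human who accepts the model's suggestion.
outcome = perform_task(Pattern.BLENDED,
                       model_predict=lambda c: ("approve", 0.95),
                       human_decide=lambda c, hint: hint or "needs review",
                       case={"id": 1})
```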
More Like this
  1. Level of automation (LoA) is increasingly recognized as an important principle in improving manufacturing strategies. However, many automation decisions are made without formally assessing LoA and can be driven by a host of organizational factors, such as the varied mental models managers use in decision-making. In this study, respondents (N = 186) were asked to watch five different assembly tasks being completed in an automotive manufacturing environment and then identify "how automated" or "how manual" they perceived each task to be. Responses were given on a visual analogue scale (VAS) with a sliding scale, where possible responses ranged from 0 (totally manual) to 100 (totally automated). The activity explored how and when individuals recognized the automated technologies employed in each task. The tasks in the videos varied primarily in whether the human played an active or a passive role in the process. Focus group comments collected as part of the study show how rating patterns revealed functional, systems-level thinking and a focus on cognitive automation in manufacturing. While the video ratings generally followed the LoA framework discussed, slight departures in the rating of each video were found.
  2. We present a framework for understanding the effects of automation and other types of technological changes on labor demand, and use it to interpret changes in US employment over the recent past. At the center of our framework is the allocation of tasks to capital and labor—the task content of production. Automation, which enables capital to replace labor in tasks it was previously engaged in, shifts the task content of production against labor because of a displacement effect. As a result, automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity. The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage. The introduction of new tasks changes the task content of production in favor of labor because of a reinstatement effect, and always raises the labor share and labor demand. We show how the role of changes in the task content of production—due to automation and new tasks—can be inferred from industry-level data. Our empirical decomposition suggests that the slower growth of employment over the last three decades is accounted for by an acceleration in the displacement effect, especially in manufacturing, a weaker reinstatement effect, and slower growth of productivity than in previous decades. 
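A stylized rendering of the decomposition described above, written in assumed notation rather than the paper's exact expression: changes in labor demand combine effects computed holding the task content of production fixed with the change in task content itself, which is the reinstatement effect net of the displacement effect.

```latex
\Delta \ln(\text{labor demand})
  = \underbrace{\text{productivity effect}
      + \text{composition effect}
      + \text{substitution effect}}_{\text{task content of production held fixed}}
  + \underbrace{\bigl(\text{reinstatement effect} - \text{displacement effect}\bigr)}_{\Delta\,\text{task content of production}}
```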
  3. We document that between 50% and 70% of changes in the U.S. wage structure over the last four decades are accounted for by relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation. We develop a conceptual framework where tasks across industries are allocated to different types of labor and capital. Automation technologies expand the set of tasks performed by capital, displacing certain worker groups from jobs for which they have comparative advantage. This framework yields a simple equation linking wage changes of a demographic group to the task displacement it experiences. We report robust evidence in favor of this relationship and show that regression models incorporating task displacement explain much of the changes in education wage differentials between 1980 and 2016. The negative relationship between wage changes and task displacement is unaffected when we control for changes in market power, deunionization, and other forms of capital deepening and technology unrelated to automation. We also propose a methodology for evaluating the full general equilibrium effects of automation, which incorporate induced changes in industry composition and ripple effects due to task reallocation across different groups. Our quantitative evaluation explains how major changes in wage inequality can go hand‐in‐hand with modest productivity gains. 
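The paper's "simple equation" is not reproduced here; as an illustrative, assumed-notation reduced form of the reported regressions, the wage change of a demographic group g is related to the task displacement it experiences, with controls X_g for market power, deunionization, and other forms of capital deepening and technology unrelated to automation:

```latex
\Delta \ln w_g \;=\; \beta \,\text{TaskDisplacement}_g \;+\; \gamma' X_g \;+\; \varepsilon_g,
\qquad \hat{\beta} < 0
\quad \text{(groups experiencing more task displacement see larger relative wage declines)}
```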
  4. Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency. In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance in the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff. 
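A hypothetical sketch of the spectrum of machine assistance described above, from a human judging alone to a predicted label accompanied by an explicit statement of strong machine performance. The level names, the flags, and the rendering function are assumptions for illustration, not the authors' experimental interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssistanceLevel:
    """One point on the spectrum from full human agency to full automation."""
    name: str
    show_explanation: bool        # highlight words the model relied on
    show_predicted_label: bool    # display the model's deceptive/truthful label
    state_machine_accuracy: bool  # explicitly tell users the model performs well

# Illustrative spectrum, ordered by increasing machine influence (assumed names).
SPECTRUM = [
    AssistanceLevel("human alone",            False, False, False),
    AssistanceLevel("explanations only",      True,  False, False),
    AssistanceLevel("predicted label",        False, True,  False),
    AssistanceLevel("label + explanations",   True,  True,  False),
    AssistanceLevel("label + accuracy claim", False, True,  True),
]

def render_trial(level: AssistanceLevel, text: str, predicted_label: str,
                 important_words: list, model_accuracy: float) -> dict:
    """Assemble what a participant judging one item would see at this level."""
    view = {"text": text}
    if level.show_explanation:
        view["highlighted_words"] = important_words
    if level.show_predicted_label:
        view["model_says"] = predicted_label
    if level.state_machine_accuracy:
        view["note"] = f"The model is correct about {model_accuracy:.0%} of the time."
    return view
```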
  5. This Article develops a framework for both assessing and designing content moderation systems consistent with public values. It argues that moderation should not be understood as a single function, but as a set of subfunctions common to all content governance regimes. By identifying the particular values implicated by each of these subfunctions, it explores the appropriate ways the constituent tasks might best be allocated: specifically, to which actors (public or private, human or technological) they might be assigned, and what constraints or processes might be required in their performance. This analysis can facilitate the evaluation and design of content moderation systems to ensure the capacity and competencies necessary for legitimate, distributed systems of content governance. Through a combination of methods, legal schemes delegate at least a portion of the responsibility for governing online expression to private actors. Sometimes, statutory schemes assign regulatory tasks explicitly; in other cases, this delegation occurs implicitly, with little guidance as to how the treatment of content should be structured. In the law's shadow, online platforms are largely given free rein to configure the governance of expression. Legal scholarship has surfaced important concerns about the private sector's role in content governance. In response, private platforms engaged in content moderation have adopted structures that mimic public governance forms. Yet, we largely lack the means to measure whether these forms are substantive, effectively infusing public values into the content moderation process, or merely symbolic artifice designed to deflect much-needed public scrutiny. This Article's proposed framework addresses that gap in two ways. First, the framework considers together all manner of legal regimes that induce platforms to engage in the function of content moderation. Second, it focuses on the shared set of specific tasks, or subfunctions, involved in the content moderation function across these regimes. Examining a broad range of content moderation regimes together highlights the existence of distinct common tasks and decision points that together constitute the content moderation function. Focusing on this shared set of subfunctions highlights the different values implicated by each and the way they can be "handed off" to human and technical actors to perform in different ways, with varying normative and political implications. This Article identifies four key content moderation subfunctions: (1) definition of policies, (2) identification of potentially covered content, (3) application of policies to specific cases, and (4) resolution of those cases. Using these four subfunctions supports a rigorous analysis of how to leverage the capacities and competencies of government and private parties throughout the content moderation process. Such attention also highlights how the exercise of that power can be constrained, either by requiring the use of particular decision-making processes or through limits on the use of automation, in ways that further address normative concerns. Dissecting the allocation of subfunctions in various content moderation regimes reveals the distinct ethical and political questions that arise in alternate configurations.
Specifically, it offers a way to think about four key questions: (1) what values are most at issue regarding each subfunction; (2) which activities might be more appropriate to delegate to particular public or private actors; (3) which constraints must be attached to the delegation of each subfunction; and (4) where can investments in shared content moderation infrastructures support relevant values? The functional framework thus provides a means for both evaluating the symbolic legal forms that firms have constructed in service of content moderation and for designing processes that better reflect public values. 
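A minimal sketch, with assumed names, of how the four subfunctions and their allocations might be represented when evaluating a moderation regime; it illustrates the structure of the framework, not a specification from the Article.

```python
from dataclasses import dataclass, field
from enum import Enum

class Subfunction(Enum):
    DEFINE_POLICIES = "definition of policies"
    IDENTIFY_CONTENT = "identification of potentially covered content"
    APPLY_POLICIES = "application of policies to specific cases"
    RESOLVE_CASES = "resolution of cases"

class Actor(Enum):
    PUBLIC_HUMAN = "public / human"
    PUBLIC_AUTOMATED = "public / technological"
    PRIVATE_HUMAN = "private / human"
    PRIVATE_AUTOMATED = "private / technological"

@dataclass
class Allocation:
    actor: Actor
    constraints: list = field(default_factory=list)  # e.g. required processes, limits on automation

# One hypothetical configuration of a moderation regime, laid out for evaluation.
regime = {
    Subfunction.DEFINE_POLICIES:  Allocation(Actor.PUBLIC_HUMAN, ["notice-and-comment style process"]),
    Subfunction.IDENTIFY_CONTENT: Allocation(Actor.PRIVATE_AUTOMATED, ["error-rate reporting"]),
    Subfunction.APPLY_POLICIES:   Allocation(Actor.PRIVATE_HUMAN, ["documented reasons"]),
    Subfunction.RESOLVE_CASES:    Allocation(Actor.PRIVATE_HUMAN, ["right of appeal"]),
}
```

Evaluating a regime then amounts to asking, for each subfunction, whether the assigned actor and the attached constraints serve the values that subfunction implicates.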