
Title: Consider the Human Work Experience When Integrating Robotics in the Workplace
Worldwide, manufacturers are reimagining the future of their workforce and its connection to technology. Rather than replacing humans, Industry 5.0 explores how humans and robots can best complement one another's unique strengths. However, realizing this vision requires an in-depth understanding of how workers view the positive and negative attributes of their jobs, and the place of robots within them. In this paper, we explore the relationship between work attributes and automation goals by engaging in field research at a manufacturing plant. We conducted face-to-face interviews with 50 assembly-line workers (n=50), which we analyzed using discourse analysis and social constructivist methods. We found that the work attributes deemed most positive by participants include social interaction, movement and exercise, (human) autonomy, problem solving, task variety, and building with their hands. The main negative work attributes included health and safety issues, feeling rushed, and repetitive work. We identified several ways robots could help reduce negative work attributes and enhance positive ones, such as reducing work interruptions and cultivating physical and psychological well-being. Based on our findings, we created a set of integration considerations for organizations planning to deploy robotics technology, and discuss how the manufacturing and HRI communities can explore these ideas in the future.
Journal Name:
2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
Page Range or eLocation-ID:
75 to 84
Sponsoring Org:
National Science Foundation
More Like this
  1. Despite promises about the near-term potential of social robots to share our daily lives, they remain unable to form autonomous, lasting, and engaging relationships with humans. Many companies are deploying social robots into the consumer and commercial market; however, both the companies and their products are relatively short-lived for many reasons. For example, current social robots succeed in interacting with humans only within controlled environments, such as research labs, and for short time periods, since longer interactions tend to provoke user disengagement. We interviewed 13 roboticists from robot manufacturing companies and research labs to delve deeper into the design process for social robots and unearth the many challenges robot creators face. Our research questions were: 1) What are the different design processes for creating social robots? 2) How are users involved in the design of social robots? 3) How are teams of robot creators constituted? Our qualitative investigation showed that varied design practices are applied when creating social robots but no consensus exists about an optimal or standard one. Results revealed that users have different degrees of involvement in the robot creation process, from no involvement to being a central part of robot development. Results also uncovered the need for multidisciplinary and international teams to work together to create robots. Drawing upon these insights, we identified implications for the field of Human-Robot Interaction that can shape the creation of best practices for social robot design.
  2. Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet, little is known about data subjects' conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects regarding social media-based automatic ER-enabled wellbeing interventions. We found that participants' attitudes toward automatic ER-enabled wellbeing interventions were predominantly negative. Negative attitudes were largely shaped by how participants compared their conceptualizations of Artificial Intelligence (AI) to the humans that traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions were based upon human attributes participants doubted AI could hold: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants' attitudes toward automatic ER-enabled wellbeing interventions shifted when participants conceptualized automatic ER-enabled wellbeing interventions' impact on others, rather than themselves. Though with reluctance, a minority of participants held more positive attitudes toward their conceptualizations of automatic ER-enabled wellbeing interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) through egregious harm prevention.
However, most participants anticipated harms associated with their conceptualizations of automatic ER-enabled wellbeing interventions for others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities of automatic ER-enabled wellbeing interventions upon which their attitudes toward them depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcome. Our study is not motivated to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but to center voices of the data subjects affected by this technology. We argue for the inclusion of data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss ethical, social, and policy implications of our findings, suggesting that automatic ER-enabled wellbeing interventions imagined by participants are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape in the US.
  3. Technologies in the workplace have been a major focus of CSCW, including studies that investigate technologies for collaborative work, explore new work environments, and address the importance of political and organizational aspects of technologies in workplaces. Emerging technologies, such as AI and robotics, have been deployed in various workplaces, and their proliferation is rapidly expanding. These technologies have not only changed the nature of work but also reinforced power and social dynamics within workplaces, requiring us to rethink the legitimate relationship between emerging technologies and human workers. Identifying how these emerging technologies will develop relationships with human workers who have limited power and voice in their workplaces will be critical to the development of equitable future work arrangements. How can these emerging technologies develop mutually beneficial partnerships with human workers? In this one-day workshop, we seek to illustrate the meaning of human-machine partnerships (HMP) by highlighting that how we define HMP may shape the design of future robots at work. By incorporating interdisciplinary perspectives, we aim to develop a taxonomy of HMP by which we can not only broaden our relationships with embodied agents but also evaluate and reconsider existing theoretical, methodological, and epistemological challenges in HMP research.
  4. Disassembly is an integral part of maintenance, upgrade, and remanufacturing operations to recover end-of-use products. Optimization of disassembly sequences and the capability of robotic technology are crucial for managing the resource-intensive nature of dismantling operations. This study proposes an optimization framework for disassembly sequence planning under uncertainty considering human-robot collaboration. The proposed model combines three attributes: disassembly cost, disassembleability, and safety, to find the optimal path for dismantling a product and assigning each disassembly operation among humans and robots. The multi-attribute utility function has been employed to address uncertainty and make a tradeoff among multiple attributes. The disassembly time reflects the cost of disassembly and is assumed to be an uncertain parameter with a Beta probability density function; the disassembleability evaluates the feasibility of conducting operations by robot; finally, the safety index ensures the safety of human workers in the work environment. The optimization model identifies the best disassembly sequence and makes tradeoffs among multi-attributes. An example of a computer desktop illustrates how the proposed model works. The model identifies the optimal disassembly sequence with less disassembly cost, high disassembleability, and increased safety index while allocating disassembly operations between human and robot. A sensitivity analysis is conducted to show the model's performance when changing the disassembly cost for the robot.
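The multi-attribute approach described in item 4 can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual model: the operation names, attribute values, weights, and precedence constraints below are all hypothetical placeholders, and the fixed expected times stand in for the mean of the Beta-distributed disassembly times the abstract describes.

```python
import itertools

# Hypothetical operation attributes (illustrative only, not from the paper).
# "exp_time" would, in the paper's model, be the mean of a Beta-distributed
# disassembly time; "disassembleability" scores robot feasibility in [0, 1];
# "safety" is the human-safety index in [0, 1].
OPS = {
    "unscrew_cover": {"exp_time": 3.0, "disassembleability": 0.9, "safety": 0.8},
    "remove_fan":    {"exp_time": 4.0, "disassembleability": 0.6, "safety": 0.9},
    "extract_board": {"exp_time": 6.0, "disassembleability": 0.4, "safety": 0.7},
}
MAX_TIME = 10.0                                           # normalization bound (assumed)
WEIGHTS = {"cost": 0.4, "disassembleability": 0.3, "safety": 0.3}  # assumed tradeoff weights
PRECEDENCE = [("unscrew_cover", "remove_fan"),            # assumed: cover off before fan,
              ("remove_fan", "extract_board")]            # fan out before board

def feasible(seq):
    """A sequence is feasible if it respects every precedence constraint."""
    order = {op: i for i, op in enumerate(seq)}
    return all(order[a] < order[b] for a, b in PRECEDENCE)

def assign(op):
    """Toy allocation rule: robot takes highly disassembleable operations."""
    return "robot" if OPS[op]["disassembleability"] >= 0.5 else "human"

def utility(seq):
    """Additive multi-attribute utility, averaged over operations.
    Lower expected time (cost) and higher disassembleability/safety
    all push utility toward 1."""
    total = 0.0
    for op in seq:
        a = OPS[op]
        cost_term = 1.0 - a["exp_time"] / MAX_TIME  # cheaper -> higher utility
        total += (WEIGHTS["cost"] * cost_term
                  + WEIGHTS["disassembleability"] * a["disassembleability"]
                  + WEIGHTS["safety"] * a["safety"])
    return total / len(seq)

# Enumerate feasible sequences, pick the highest-utility one, allocate agents.
candidates = [s for s in itertools.permutations(OPS) if feasible(s)]
best = max(candidates, key=utility)
plan = [(op, assign(op)) for op in best]
print("best sequence:", best)
print("allocation:", plan)
```

With the toy numbers above, only one ordering satisfies the precedence constraints, and the allocation rule sends the two highly disassembleable steps to the robot and the delicate board extraction to the human. The real model additionally handles uncertainty by integrating over the Beta time distribution rather than using a point estimate.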
  5. Abstract

    In this paper, I analyze the experiences of the world's largest all‐women community health workforce through the lens of liminality. Originally used to describe transition from one state to the other, the concept of liminality in the study of work and organizations can frame workers' experiences of being in‐between established structures and roles in varying degrees, times, and/or places. India's ASHAs, or Accredited Social Health Activists, are community women at the frontlines of the state's health care provisioning. But the state does not categorize them as workers or employees. ASHAs are considered volunteers. Instead of salaries, they are paid task‐based incentives. Based on 14 months of ethnographic fieldwork, including 80 interviews, I find that ASHAs' liminal occupational status as ‘paid volunteers’ produces conditions of chronic underpayment and control for them, further lowering their already low wages. This has implications for how we understand the gender wage gap. I argue that we need to consider not just how much women are paid, but how the payment is structured, and how that places marginalized women workers in relation to others in the workplace. Moving beyond whether liminality is a negative or positive experience, future research should delineate the conditions under which liminality is negative or positive.
