Title: Human, AI, Robot Teaming and the Future of Work: Barriers and Opportunities for Advancement
Global investments in artificial intelligence (AI) and robotics are on the rise, with results that will impact global economies, security, safety, and human well-being. The most heralded advances in this space are more often about technologies capable of disrupting business-as-usual than about innovation that advances or supports a global workforce. The Future of Work at the Human-Technology Frontier is one of NSF's 10 Big Ideas for research advancement. This panel discussion focuses on the barriers and opportunities for a future of human and AI/robot teaming, with people at the center of complex systems that provide social, ethical, and economic value.
Award ID(s):
1936997
PAR ID:
10285718
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
64
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
62 to 66
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The prevalence and success of AI applications have been tempered by concerns about the controllability of AI systems and about AI's impact on the future of work. These concerns reflect two aspects of a central question: how would humans work with AI systems? While research on AI safety focuses on designing AI systems that allow humans to safely instruct and control AI systems, research on AI and the future of work focuses on the impact of AI on humans who may be unable to do so. This Blue Sky Ideas paper proposes a unifying set of declarative principles that enable a more uniform evaluation of arbitrary AI systems along multiple dimensions of the extent to which they are suitable for use by specific classes of human operators. It leverages recent AI research and the unique strengths of the field to develop human-centric principles for AI systems that address the concerns noted above.
  2. Abstract Millions of individuals who have limited or no functional speech use augmentative and alternative communication (AAC) technology to participate in daily life and exercise the human right to communication. While advances in AAC technology lag significantly behind those in other technology sectors, mainstream technology innovations such as artificial intelligence (AI) present potential for the future of AAC. However, a new future of AAC will only be as effective as it is responsive to the needs and dreams of the people who rely upon it every day. AAC innovation must reflect an iterative, collaborative process with AAC users. To do this, we worked collaboratively with AAC users to complete participatory qualitative research about AAC innovation through AI. We interviewed 13 AAC users regarding (1) their current AAC engagement; (2) the barriers they experience in using AAC; (3) their dreams regarding future AAC development; and (4) reflections on potential AAC innovations. To analyze these data, a rapid research evaluation and appraisal was used. Within this article, the themes that emerged during interviews and their implications for future AAC development will be discussed. Strengths, barriers, and considerations for participatory design will also be described. 
  3. A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human–machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific studies of its counterpart: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue ‘Cognitive artificial intelligence’. 
  4. Power, Mary (Ed.)
    Research in both ecology and AI strives for predictive understanding of complex systems, where nonlinearities arise from multidimensional interactions and feedbacks across multiple scales. After a century of independent, asynchronous advances in computational and ecological research, we foresee a critical need for intentional synergy to meet current societal challenges against the backdrop of global change. These challenges include understanding the unpredictability of systems-level phenomena and resilience dynamics on a rapidly changing planet. Here, we spotlight both the promise and the urgency of a convergence research paradigm between ecology and AI. Ecological systems are a challenge to fully and holistically model, even using the most prominent AI technique today: deep neural networks. Moreover, ecological systems have emergent and resilient behaviors that may inspire new, robust AI architectures and methodologies. We share examples of how challenges in ecological systems modeling would benefit from advances in AI techniques that are themselves inspired by the systems they seek to model. Both fields have inspired each other, albeit indirectly, in an evolution toward this convergence. We emphasize the need for more purposeful synergy to accelerate the understanding of ecological resilience whilst building the resilience currently lacking in modern AI systems, which have been shown to fail at times because of poor generalization in different contexts. Persistent epistemic barriers would benefit from attention in both disciplines. The implications of a successful convergence go beyond advancing ecological disciplines or achieving an artificial general intelligence—they are critical for both persisting and thriving in an uncertain future. 
  5. Mahmoud, Ali B. (Ed.)
    Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field seems to be mixed. Although high expectations for the future of medical AI do exist in the American public, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the trust of the American public towards medical AI. Primarily, we contrasted preferences for AI and human professionals to be medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals make medical decisions, while at the same time believing they are more likely to make culturally biased decisions than AI. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and "100 years from now." (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.