There is a substantial and ever-growing corpus of evidence and literature exploring the impacts of Artificial Intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to the risk stemming from unaligned Artificial General Intelligence (AGI). In this paper, we argue that current and near-term AI technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to the unaligned AGI scenario. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities.
Managing the risks of artificial general intelligence: A human factors and ergonomics perspective
Abstract: Artificial General Intelligence (AGI) is the next and forthcoming evolution of Artificial Intelligence (AI). Though it could bring significant benefits to society, there are also concerns that AGI could pose an existential threat. The critical role of Human Factors and Ergonomics (HFE) in the design of safe, ethical, and usable AGI has been emphasized; however, there is little evidence to suggest that HFE is currently influencing development programs. Further, given the broad spectrum of HFE application areas, it is not clear what activities are required to fulfill this role. This article presents the perspectives of 10 researchers working in AI safety on the potential risks associated with AGI, the HFE concepts that require consideration during AGI design, and the activities required for HFE to fulfill its critical role in what could be humanity's final invention. Though a diverse set of perspectives is presented, there is broad agreement that AGI potentially poses an existential threat and that many HFE concepts should be considered during AGI design and operation. A range of critical activities is proposed, including collaboration with AGI developers, dissemination of HFE work in other relevant disciplines, the embedding of HFE throughout the AGI lifecycle, and the application of systems HFE methods to help identify and manage risks.
- Award ID(s): 1828010
- PAR ID: 10514444
- Publisher / Repository: Wiley Periodicals, LLC
- Date Published:
- Journal Name: Human Factors and Ergonomics in Manufacturing & Service Industries
- Volume: 33
- Issue: 5
- ISSN: 1090-8471
- Page Range / eLocation ID: 366–378
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
There are many initiatives that teach Artificial Intelligence (AI) literacy to K-12 students. Most downsize college-level instructional materials to grade-level-appropriate formats, overlooking students' unique perspectives in the design of curricula. To investigate the use of educational games as a vehicle for uncovering youth's understanding of AI instruction, we co-designed games with 39 Black, Hispanic, and Asian high school girls and non-binary youth to create engaging learning materials for their peers. We conducted qualitative analyses of the designed game artifacts, student discourse, and their feedback on the efficacy of learning activities. This study highlights the benefits of co-design and learning games to uncover students' understanding and ability to apply AI concepts in game-based learning, their emergent perspectives of AI, and the prior knowledge that informs their game design choices. Our research uncovers students' AI misconceptions and informs the design of educational games and grade-level-appropriate AI instruction.
-
There is growing awareness of the central role that artificial intelligence (AI) plays now and will play in children's futures. This has led to increasing interest in engaging K-12 students in AI education to promote their understanding of AI concepts and practices. Leveraging principles from problem-based pedagogies and game-based learning, our approach integrates AI education into a set of unplugged activities and a game-based learning environment. In this work, we describe outcomes from our efforts to co-design problem-based AI curriculum with elementary school teachers.
-
As artificial intelligence (AI) profoundly reshapes our personal and professional lives, there are growing calls to support pre-college-aged youth as they develop capacity to engage critically and productively with AI. While efforts to introduce AI concepts to pre-college-aged youth have largely focused on older teens, there is growing recognition of the importance of developing AI literacy among younger children. Today's youth already encounter and use AI regularly, but they might not yet be aware of its role, limitations, risks, or purpose in a particular encounter, and may not be positioned to question whether it should be doing what it's doing. In response to this critical moment to develop AI learning experiences that can support children at this age, researchers and learning designers at the University of California's Lawrence Hall of Science, in collaboration with AI developers at the University of Southern California's Institute for Creative Technologies, have been iteratively developing and studying a series of interactive learning experiences for public science centers and similar out-of-school settings. The project is funded through a grant by the National Science Foundation, and the resulting exhibit, The Virtually Human Experience (VHX), represents one of the first interactive museum exhibits in the United States designed explicitly to support young children and their families in developing understanding of AI. The coordinated experiences in VHX include both digital (computer-based) and non-digital ("unplugged") activities designed to engage children (ages 7–12) and their families in learning about AI. In this paper, we describe emerging insights from a series of case studies that track small groups of museum visitors (e.g., a parent and two children) as they interact with the exhibit. The case studies reveal opportunities and challenges associated with designing AI learning experiences for young children in a free-choice environment like a public science center. In particular, we focus on three themes emerging from our analyses of case data: 1) relationships between design elements and collaborative discourse within intergenerational groups (i.e., families and other adult-child pairings); 2) relationships between design elements and impromptu visitor experimentation within the exhibit space; and 3) challenges in designing activities with a low threshold for initial engagement such that even the youngest visitors can engage meaningfully with the activity. Findings from this study are directly relevant to researchers and learning designers engaged in rapidly expanding efforts to develop AI learning opportunities for youth, and are likely to be of interest to a broad range of researchers, designers, and practitioners as society encounters this transformative technology and its applications become increasingly integral to how we live and work.
-
Design artifacts provide a mechanism for illustrating design information and concepts, but their effectiveness relies on alignment across design agents in what these artifacts represent. This work investigates the agreement between multi-modal representations of design artifacts by humans and artificial intelligence (AI). Design artifacts are considered to constitute stimuli designers interact with to become inspired (i.e., inspirational stimuli), for which retrieval often relies on computational methods using AI. To facilitate this process for multi-modal stimuli, a better understanding of human perspectives of non-semantic representations of design information, e.g., by form or function-based features, is motivated. This work compares and evaluates human and AI-based representations of 3D-model parts by visual and functional features. Humans and AI were found to share consistent representations of visual and functional similarities, which aligned well with coarse, but not more granular, levels of similarity. Human–AI alignment was higher for identifying low compared to high similarity parts, suggesting mutual representation of features underlying more obvious than nuanced differences. Human evaluation of part relationships in terms of belonging to the same or different categories revealed that human and AI-derived relationships similarly reflect concepts of "near" and "far." However, levels of similarity corresponding to "near" and "far" differed depending on the criteria evaluated, where "far" was associated with nearer visually than functionally related stimuli. These findings contribute to a fundamental understanding of human evaluation of information conveyed by AI-represented design artifacts needed for successful human–AI collaboration in design.
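To make the kind of analysis described in that last abstract concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of one way human–AI alignment in similarity judgments could be quantified: pairwise cosine similarities over feature embeddings, compared against human ratings via Spearman rank correlation, plus a coarse "near"/"far" median split. Every embedding, rating, and threshold below is invented for illustration.

```python
# Hypothetical sketch of a human-AI similarity-alignment analysis.
# All data below are randomly generated stand-ins, not study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Invented "visual feature" embeddings for 12 hypothetical 3D-model parts.
n_parts, dim = 12, 64
visual = rng.normal(size=(n_parts, dim))

def pairwise_cosine(x):
    """Cosine similarity for every unordered pair of rows in x."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = x @ x.T
    iu = np.triu_indices(len(x), k=1)  # indices of unique pairs
    return sim[iu]

ai_sim = pairwise_cosine(visual)

# Stand-in human similarity ratings for the same pairs (1-7 scale).
human_sim = rng.integers(1, 8, size=ai_sim.shape)

# Rank correlation: do humans and the model order part pairs similarly?
rho, p = spearmanr(ai_sim, human_sim)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Coarse "near"/"far" agreement via a median split of both measures,
# mirroring the abstract's finding that alignment is strongest at
# coarse levels of similarity.
near_ai = ai_sim > np.median(ai_sim)
near_human = human_sim > np.median(human_sim)
print(f"near/far agreement: {np.mean(near_ai == near_human):.0%}")
```

With random stand-ins the correlation should hover near zero; with real embeddings and ratings, these same two statistics would capture the fine-grained and coarse alignment the abstract reports.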