

Title: Generative AI Going Awry: Enabling Designers to Proactively Avoid It in CSCW Applications
The rapid development and deployment of generative AI technologies poses a design challenge: how to proactively understand the implications of productizing and deploying these new technologies, especially their negative design implications. This is especially concerning in CSCW applications, where AI agents can introduce misunderstandings, or even misdirections, among the people interacting with them. In this panel, researchers from academia and industry reflect on their experiences with ideas, methods, and processes that enable designers to proactively shape the responsible design of genAI in collaborative applications. The panelists represent a range of approaches, including speculative fiction, design activities, design toolkits, and process guides. We hope the panel encourages a discussion in the CSCW community around techniques we can put into practice today to enable the responsible design of genAI.
Award ID(s):
2048244
PAR ID:
10623857
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400711145
Page Range / eLocation ID:
125 to 127
Subject(s) / Keyword(s):
Generative AI, design, redteaming
Format(s):
Medium: X
Location:
San Jose, Costa Rica
Sponsoring Org:
National Science Foundation
More Like This
  1. What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents expound these technologies' benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between the technologies' capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions—as an example of 'intimate' data—in computing, how emotion AI is legitimized, people's experiences with and perceptions of emotion AI in social and collaborative settings, emotion AI's development practices, and using design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies including and beyond emotion-sensing.
  2. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  3. The authors discuss how exposure to Generative Artificial Intelligence (GenAI) tools led them to reconsider their approach to supporting secondary preservice teachers in curriculum development and to acknowledge both the challenges and opportunities these technologies present in teacher preparation. An upper division undergraduate class designed to support curricular development of all secondary teachers was redesigned to incorporate the use of large language models (LLMs) such as ChatGPT. The findings indicate that GenAI assists in planning, assessment design, and the development of assignments that require human insight beyond AI capabilities. The study highlights the benefits of GenAI in curriculum development and addresses concerns about academic integrity and equitable access. The authors recommend that educators explore GenAI’s potential to support learning and develop strategies for its responsible use in teacher preparation. 
  4. This research-to-practice paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI, its socio-technical implications, and its practical applications for all levels of education. With the rapid evolution of artificial intelligence (AI), there is a need for AI literacy that goes beyond the traditional AI education curriculum. AI literacy has been conceptualized in various ways, including public literacy, competency building for designers, conceptual understanding of AI concepts, and domain-specific upskilling. Most of these conceptualizations were established before the public release of Generative AI (Gen-AI) tools such as ChatGPT. AI education has focused on the principles and applications of AI through a technical lens that emphasizes the mastery of AI principles, the mathematical foundations underlying these technologies, and the programming and mathematical skills necessary to implement AI solutions. The non-technical component of AI literacy has often been limited to social and ethical implications, privacy and security issues, or the experience of interacting with AI. In AI Literacy for All, we emphasize a balanced curriculum that includes technical as well as non-technical learning outcomes to enable a conceptual understanding and critical evaluation of AI technologies in an interdisciplinary socio-technical context. The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI. While it is important to include all learning outcomes for AI education in a Computer Science major, the learning outcomes can be adjusted for other learning contexts, including non-CS majors, high school summer camps, the adult workforce, and the public.
This paper advocates for a shift in AI literacy education to offer a more interdisciplinary socio-technical approach as a pathway to broaden participation in AI. This approach not only broadens students' perspectives but also prepares them to think critically about integrating AI into their future professional and personal lives. 
  5. Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals' attitudes toward, and trust in, GenAI chatbots' support for SA. Through a mixed-method approach involving surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to trust and embrace GenAI chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as GenAI chatbots' ability to generate empathetic responses and their context-sensitive limitations, which were particularly important among individuals with SA. We also discuss the design implications and use of GenAI chatbots in fostering cognitive and emotional trust, with practical and design considerations.