This content will become publicly available on November 11, 2025

Title: Generative AI Going Awry: Enabling Designers to Proactively Avoid It in CSCW Applications
The rapid development and deployment of generative AI technologies creates a design challenge: how to proactively understand the implications of productizing and deploying these new technologies, especially their negative design implications. This is especially concerning in CSCW applications, where AI agents can introduce misunderstandings, or even misdirections, for the people interacting with them. In this panel, researchers from academia and industry will reflect on their experiences with ideas, methods, and processes that enable designers to proactively shape the responsible design of genAI in collaborative applications. The panelists represent a range of approaches, including speculative fiction, design activities, design toolkits, and process guides. We hope the panel encourages a discussion in the CSCW community around techniques we can put into practice today to enable the responsible design of genAI.
Award ID(s):
2048244
PAR ID:
10623857
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400711145
Page Range / eLocation ID:
125 to 127
Subject(s) / Keyword(s):
Generative AI, design, redteaming
Format(s):
Medium: X
Location:
San Jose, Costa Rica
Sponsoring Org:
National Science Foundation
More Like This
  1. What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents expound these technologies' benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between the technologies' capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions — as an example of 'intimate' data — in computing, how emotion AI is legitimized, people's experiences with and perceptions of emotion AI in social and collaborative settings, emotion AI's development practices, and the use of design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies including and beyond emotion-sensing.
  2. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- We apply VSD to design LLM-based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity development, well-being, human–AI relationships and environmental sustainability are key values for designing LLM-based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  3. The authors discuss how exposure to Generative Artificial Intelligence (GenAI) tools led them to reconsider their approach to supporting secondary preservice teachers in curriculum development and to acknowledge both the challenges and opportunities these technologies present in teacher preparation. An upper division undergraduate class designed to support curricular development of all secondary teachers was redesigned to incorporate the use of large language models (LLMs) such as ChatGPT. The findings indicate that GenAI assists in planning, assessment design, and the development of assignments that require human insight beyond AI capabilities. The study highlights the benefits of GenAI in curriculum development and addresses concerns about academic integrity and equitable access. The authors recommend that educators explore GenAI’s potential to support learning and develop strategies for its responsible use in teacher preparation. 
  4. Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI), known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals’ attitudes and trust in GenAI chatbots’ support for SA. Through a mixed-method approach that involved surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to trust and embrace GenAI chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as GenAI chatbots’ ability to generate empathetic responses and its context-sensitive limitations, which were particularly important among individuals with SA. We also discuss the design implications and use of GenAI chatbots in fostering cognitive and emotional trust, with practical and design considerations. 
  5. Generative artificial intelligence (GenAI) systems introduce new possibilities for enhancing professionals' workflows, enabling novel forms of human–AI co-creation. However, professionals often struggle to learn to work with GenAI systems effectively. While research has begun to explore the design of interfaces that support users in learning to co-create with GenAI, we lack systematic approaches to investigate the effectiveness of these supports. In this paper, we present a systematic approach for studying how to support learning to co-create with GenAI systems, informed by methods and concepts from the learning sciences. Through an experimental case study, we demonstrate how our approach can be used to study and compare the impacts of different types of learning supports in the context of text-to-image GenAI models. Reflecting on these results, we discuss directions for future work aimed at improving interfaces for human–AI co-creation.