Accounting for technologies’ unintended consequences—whether misinformation on social media or issues of sustainability and social justice—increasingly requires HCI to consider technology design at a societal scale. At this scale, public and corporate policies play a critical role in shaping technologies and user behaviors. However, research and practice around technology design and policy design have largely been kept separate. How can technology design and policies better inform and coordinate with each other in generating safe new technologies? What new solutions might emerge when HCI practitioners design technology and its policies simultaneously to account for its societal impacts? This workshop addresses these questions. It will 1) identify disciplines and areas of expertise needed for a tighter, more proactive integration of technology and policy design, 2) launch a community of researchers, educators, and designers interested in this integration, and 3) identify and publish an HCI research and education agenda for designing technologies and technology policies simultaneously.
AI and the Afterlife
AI technologies are likely to impact an array of existing practices (and give rise to a host of novel ones) around end-of-life planning, remembrance, and legacy in ways that will have profound legal, economic, emotional, and religious ramifications. At this critical moment of technological change, there is an opportunity for the HCI community to shape the discourse on this important topic through value-sensitive and community-centered approaches. This workshop will bring together a broad group of academics and practitioners with varied perspectives spanning HCI, AI, and other relevant disciplines (e.g., law, economics, religious studies) to support community-building, agenda-setting, and prototyping activities among scholars and practitioners interested in the nascent topic of how advances in AI will change socio-technical practices around death, remembrance, and legacy.
- Award ID(s):
- 2048244
- PAR ID:
- 10528213
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9798400703317
- Page Range / eLocation ID:
- 1 to 5
- Subject(s) / Keyword(s):
- AI; Generative AI; AI agents; HCI; digital afterlife; digital legacy; post-mortem AI; post-mortem data management; end-of-life planning; death
- Format(s):
- Medium: X
- Location:
- Honolulu, HI, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
HCI researchers increasingly conduct emotionally demanding research in a variety of different contexts. Though scholarship has begun to address the experiences of HCI researchers conducting this work, there is a need to develop guidelines and best practices for researcher wellbeing. In this one-day CHI workshop, we will bring together a group of HCI researchers across sectors and career levels who conduct emotionally demanding research to discuss their experiences, self-care practices, and strategies for research. Based on these discussions, we will work with workshop attendees to develop best practices and guidelines for researcher wellbeing in the context of emotionally demanding HCI research; launch a repository of community-sourced resources for researcher wellbeing; document the experiences of HCI researchers conducting emotionally demanding research; and establish a community of HCI researchers conducting this type of work.
-
Fostering public AI literacy has been a growing area of interest at CHI for several years, and a substantial community is forming around issues such as teaching children how to build and program AI systems, designing learning experiences to broaden public understanding of AI, developing explainable AI systems, understanding how novices make sense of AI, and exploring the relationship between public policy, ethics, and AI literacy. Previous workshops related to AI literacy have been held at other conferences (e.g., SIGCSE, AAAI) and have mostly focused on bringing together researchers and educators interested in AI education in K-12 classroom environments, an important subfield of this area. Our workshop seeks to cast a wider net that encompasses both HCI research related to introducing AI in K-12 education and HCI research concerned with issues of AI literacy more broadly, including adult education, interactions with AI in the workplace, understanding how users make sense of and learn about AI systems, research on developing explainable AI (XAI) for non-expert users, and public policy issues related to AI literacy.
-
How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules more than doubled over the course of a year. AI rules are more common in larger subreddits and communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI in different community contexts. Platform designers and HCI researchers should heed these concerns if they hope to encourage community self-determination in the age of generative AI. We make our datasets public to enable future large-scale studies of community self-governance.
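The core measurement in a study like this—flagging which communities have AI-related rules and computing their prevalence—can be sketched in a few lines. The snippet below is a minimal illustration only: the sample data, the keyword pattern, and the function names are hypothetical, and the actual paper labels rules with hand-built taxonomies over 300,000+ subreddits, not a simple keyword match.

```python
import re

# Hypothetical sample of community rules; the real study uses a curated
# taxonomy over 300,000+ subreddits, not this illustrative keyword match.
rules = {
    "r/art_example": ["No AI-generated images", "Be civil"],
    "r/support_example": ["Be kind", "No medical advice"],
    "r/celebs_example": ["No AI deepfakes", "Source your photos"],
}

# Simple keyword pattern standing in for a proper rule taxonomy.
AI_PATTERN = re.compile(r"\b(AI|deepfake|generative|GPT)\b", re.IGNORECASE)

def has_ai_rule(rule_texts):
    """Return True if any rule text mentions an AI-related term."""
    return any(AI_PATTERN.search(text) for text in rule_texts)

def ai_rule_prevalence(communities):
    """Fraction of communities with at least one AI-related rule."""
    flagged = sum(has_ai_rule(texts) for texts in communities.values())
    return flagged / len(communities)

print(ai_rule_prevalence(rules))  # 2 of the 3 sample communities match
```

Running the same prevalence computation on snapshots taken a year apart would yield the kind of over-time comparison the abstract reports.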
-
An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners’ current practices and tactics to enact cross-functional collaboration for AI fairness, in order to identify opportunities to support more effective collaboration. We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles from 17 companies. We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles. In addition, in organizational contexts with a lack of resources and incentives for fairness work, practitioners often piggybacked on existing requirements (e.g., for privacy assessments) and AI development norms (e.g., the use of quantitative evaluation metrics), although they worry that these tactics may be fundamentally compromised. Finally, we draw attention to the invisible labor that practitioners take on as part of this bridging and piggybacking work to enact interdisciplinary collaboration for fairness. We close by discussing opportunities for both FAccT researchers and AI practitioners to better support cross-functional collaboration for fairness in the design and development of AI systems.