Accounting for technologies’ unintended consequences—whether they are misinformation on social media or issues of sustainability and social justice—increasingly requires HCI to consider technology design at a societal scale. At this scale, public and corporate policies play a critical role in shaping technologies and user behaviors. However, the research and practices around tech and policy design have largely been kept separate. How can technology design and policies better inform and coordinate with each other in generating safe new technologies? What new solutions might emerge when HCI practitioners design technology and its policies simultaneously to account for its societal impacts? This workshop addresses these questions. It will 1) identify the disciplines and areas of expertise needed for a tighter, more proactive integration of technology and policy design, 2) launch a community of researchers, educators, and designers interested in this integration, and 3) identify and publish an HCI research and education agenda for designing technologies and technology policies simultaneously.
AI and the Afterlife
AI technologies are likely to impact an array of existing practices (and give rise to a host of novel ones) around end-of-life planning, remembrance, and legacy in ways that will have profound legal, economic, emotional, and religious ramifications. At this critical moment of technological change, there is an opportunity for the HCI community to shape the discourse on this important topic through value-sensitive and community-centered approaches. This workshop will bring together a broad group of academics and practitioners with varied perspectives, including HCI, AI, and other relevant disciplines (e.g., law, economics, and religious studies), to support community-building, agenda-setting, and prototyping activities among scholars and practitioners interested in the nascent topic of how advances in AI will change socio-technical practices around death, remembrance, and legacy.
- Award ID(s): 2048244
- PAR ID: 10528213
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400703317
- Page Range / eLocation ID: 1 to 5
- Subject(s) / Keyword(s): AI; Generative AI; AI agents; HCI; digital afterlife; digital legacy; post-mortem AI; post-mortem data management; end-of-life planning; death
- Format(s): Medium: X
- Location: Honolulu, HI, USA
- Sponsoring Org: National Science Foundation
More Like this
- Trained and optimized for typical and fluent speech, speech AI works poorly for people with speech diversities, often interrupting them and misinterpreting their speech. The increasing deployment of speech AI in automated phone menus, AI-conducted job interviews, and everyday devices poses tangible risks to people with speech diversities. To mitigate these risks, this workshop aims to build a multidisciplinary coalition and set the research agenda for fair and accessible speech AI. Bringing together a broad group of academics and practitioners with diverse perspectives, including HCI, AI, and other relevant fields such as disability studies, speech language pathology, and law, this workshop will establish a shared understanding of the technical challenges for fair and accessible speech AI, as well as its ramifications in design, user experience, policy, and society. In addition, the workshop will invite and highlight first-person accounts from people with speech diversities, facilitating direct dialogues and collaboration between speech AI developers and the impacted communities. The key outcomes of this workshop include a summary paper that synthesizes our learnings and outlines the roadmap for improving speech AI for people with speech diversities, as well as a community of scholars, practitioners, activists, and policy makers interested in driving progress in this domain.
- HCI researchers increasingly conduct emotionally demanding research in a variety of contexts. Though scholarship has begun to address the experiences of HCI researchers conducting this work, there is a need to develop guidelines and best practices for researcher wellbeing. In this one-day CHI workshop, we will bring together a group of HCI researchers across sectors and career levels who conduct emotionally demanding research to discuss their experiences, self-care practices, and strategies for research. Based on these discussions, we will work with workshop attendees to develop best practices and guidelines for researcher wellbeing in the context of emotionally demanding HCI research; launch a repository of community-sourced resources for researcher wellbeing; document the experiences of HCI researchers conducting emotionally demanding research; and establish a community of HCI researchers conducting this type of work.
- How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules more than doubled over the course of a year. AI rules are more common in larger subreddits and in communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI in different community contexts. Platform designers and HCI researchers should heed these concerns if they hope to encourage community self-determination in the age of generative AI. We make our datasets public to enable future large-scale studies of community self-governance. (A hedged sketch of the rule-collection step described here appears after this list.)
- Fostering public AI literacy has been a growing area of interest at CHI for several years, and a substantial community is forming around issues such as teaching children how to build and program AI systems, designing learning experiences to broaden public understanding of AI, developing explainable AI systems, understanding how novices make sense of AI, and exploring the relationship between public policy, ethics, and AI literacy. Previous workshops related to AI literacy have been held at other conferences (e.g., SIGCSE, AAAI) and have mostly focused on bringing together researchers and educators interested in AI education in K-12 classroom environments, an important subfield of this area. Our workshop seeks to cast a wider net that encompasses both HCI research related to introducing AI in K-12 education and HCI research concerned with issues of AI literacy more broadly, including adult education, interactions with AI in the workplace, understanding how users make sense of and learn about AI systems, research on developing explainable AI (XAI) for non-expert users, and public policy issues related to AI literacy.
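The rule-collection step in the Reddit study above can be illustrated with a minimal, hypothetical sketch. This is not the authors' pipeline: the `praw` client, placeholder credentials, keyword pattern, and example subreddit names are assumptions made for illustration only; the study itself labeled rules using taxonomies rather than a keyword match.

```python
# Hypothetical sketch (not the study's actual pipeline): fetch a subreddit's
# community rules via the Reddit API (praw) and flag rules that mention AI.
import re

import praw  # third-party Reddit API client: pip install praw

# Placeholder credentials -- register a script app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="ai-rule-survey/0.1 (research sketch)",
)

# Illustrative keyword heuristic; terms chosen for demonstration only.
AI_PATTERN = re.compile(
    r"\b(AI|artificial intelligence|GPT|chatbot|midjourney|stable diffusion)\b",
    re.IGNORECASE,
)

def ai_rules(subreddit_name: str) -> list[str]:
    """Return the names of community rules that appear to govern AI content."""
    flagged = []
    for rule in reddit.subreddit(subreddit_name).rules:
        text = f"{rule.short_name} {rule.description or ''}"
        if AI_PATTERN.search(text):
            flagged.append(rule.short_name)
    return flagged

if __name__ == "__main__":
    # Example subreddits, chosen for illustration only.
    for name in ["art", "AskHistorians", "pics"]:
        print(name, ai_rules(name))
```

At the scale of the study, rule text for hundreds of thousands of subreddits would more likely be gathered in bulk (for example, from archived data dumps) and then labeled with the taxonomies the authors describe, rather than filtered by a simple regular expression as above.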

