Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness to advocate is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
Making Transparency Influencers: A Case Study of an Educational Approach to Improve Responsible AI Practices in News and Media
Concerns about the risks posed by artificial intelligence (AI) have resulted in growing interest in algorithmic transparency. While algorithmic transparency is well-studied, there is evidence that many organizations do not value implementing transparency. In this case study, we test a ground-up approach to ensuring better real-world algorithmic transparency by creating transparency influencers — motivated individuals within organizations who advocate for transparency. We held an interactive online workshop on algorithmic transparency and advocacy for 15 professionals from news, media, and journalism. We reflect on workshop design choices and present insights from participant interviews. We found positive evidence for our approach: in the days following the workshop, three participants had engaged in pro-transparency advocacy. Notably, one of them advocated for algorithmic transparency at an organization-wide AI strategy meeting. In the words of a participant: “if you are questioning whether or not you need to tell people [about AI], you need to tell people.”
- PAR ID: 10514477
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
- ISBN: 9798400703317
- Page Range / eLocation ID: 1 to 8
- Subject(s) / Keyword(s): responsible AI, transparency, explainability, artificial intelligence, machine learning, tempered radicals
- Format(s): Medium: X
- Location: Honolulu HI USA
- Sponsoring Org: National Science Foundation
More Like this
- Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI’s negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (“smart replies”), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships. More specifically, it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits if used overtly.
- In this session, we will consider how to use place-based data to build your case to sponsors for funding research at your institution, particularly for sponsors who operate on the national or international scale. The setting of your institution—the communities it developed in, the region where it operates, and the people it reaches and serves—is key for conveying its unique capacities and potentials, and for making sponsors eager to bring you into their funding portfolio. How can data help you introduce yourself as an institution and tell your story in geographical and economic context? In this session we will explore US Census data, other federal data hubs, and research and reporting from organizations such as the Pew Research Center or the National Bureau of Economic Research (NBER). We will cover the benefits and challenges of working with raw data; identify suitable data types for certain purposes, such as diversity and equity issues; and consider what kinds of data and presentations are most compelling to different types of funders. With greater awareness of what data and tools are available, you can “put yourself on the map” and paint a vivid picture of your community for prospective funders. Presented at the 2024 Research Analytics Summit in Albuquerque, NM.
- This workshop will provide strategies and techniques for designing and executing computational petrology research projects and will engage participants in using software called Rhyolite-MELTS and the Magma Chamber Simulator (MCS) to address questions about open-system magma evolution. Participants will:
  - Be introduced to petrologic and geochemical questions that can be addressed by computational tools such as Rhyolite-MELTS and MCS.
  - Be presented with case studies that utilize these computational tools to address petrologic questions.
  - Be introduced to computational research design strategies and data management techniques.
  - Learn the limits of thermodynamic databases and the functionality of computational methods when applied to natural systems.
  - Collaborate and discuss strategies to apply these techniques to petrologic scenarios provided by the conveners.
  - Have the opportunity to pose questions to MCS and Rhyolite-MELTS experts that will aid in the set-up of their computational projects.
  - Network and benefit from the experiences and expertise of other scientists.
  Petrologists of all levels are encouraged to join the workshop! If you need training on the use of these tools, we will provide Zoom sessions prior to the workshop, with dates to be determined. If you have already taken an MCS workshop or attended a MELTS short course, please consider joining us again for additional training on research project design and execution. MCS and Rhyolite-MELTS can also be used as teaching tools for those interested in integration into petrology/geochemistry classes, so please sign up if you would like to use these tools in your classes. The workshop will take place Tuesday, 1 October and Wednesday, 2 October, 08:00-13:30 MST/UTC-7 on both days. Registration is done through the Goldschmidt2024 conference registration form. If you are registering for the workshop only and not participating in the conference, on the Registration Options page of the form, under "Conference Options", please select "Science Workshop Only Remote (no conference attendance)", then choose this workshop in the section "Post-Conference Science Workshop: Remote (October 2024)" before proceeding to payment.
- An initial exploratory study examined basic parameters of the sustainability mindset in a historically underrepresented group within engineering. An NSF water quality engineering research project engaged citizen scientists from vulnerable Latinx families in design, construction, and use of acrylic concrete structures for rainwater harvesting. During the start, middle, and end of the project, participants were asked to share their perceptions of sustainability through a series of exploratory focus group questions: “How do you feel about droughts in the region; can you please tell me what you know about drought-resiliency; do you know ways a person might be able to conserve water during a drought; can you please tell me what you know about water quality testing?” Three coders (an environmental engineer, a civil engineer, and a sociologist) conducted a domain analysis of the focus group to determine emergent themes reflecting the sustainability mindset of the citizen scientists. Preliminary results show that between the onset and conclusion of the rainwater harvesting project, participants increasingly articulated their thoughts on sustainability in a future-oriented context requiring collective action in a broader, community sense. The preliminary findings have implications for sustainability-focused engineering outreach and crowdsourcing efforts.