Responsible Data Science (RDS) and Responsible AI (RAI) have emerged as prominent areas of research and practice. Yet educational materials and methodologies on this important subject are still lacking. In this paper, I will recount my experience in developing, teaching, and refining a technical course called “Responsible Data Science”, which tackles the issues of ethics in AI, legal compliance, data quality, algorithmic fairness and diversity, transparency of data and algorithms, privacy, and data protection. I will also describe a public education course called “We are AI: Taking Control of Technology” that brings these topics of AI ethics to the general audience in a peer-learning setting. I have made all course materials publicly available online, hoping to inspire others in the community to come together to form a deeper understanding of the pedagogical needs of RDS and RAI, and to develop and share the much-needed concrete educational materials and methodologies.
                    
                            
                            PACE: Participatory AI for Community Engagement
                        
                    
    
The public sector leverages artificial intelligence (AI) to enhance the efficiency, transparency, and accountability of civic operations and public services. This includes initiatives such as predictive waste management, facial recognition for identification, and advanced tools in the criminal justice system. While public-sector AI can improve efficiency and accountability, it also has the potential to perpetuate biases, infringe on privacy, and marginalize vulnerable groups. Responsible AI (RAI) research aims to address these concerns by focusing on fairness and equity through participatory AI. We invite researchers, community members, and public sector workers to collaborate on designing, developing, and deploying RAI systems that enhance public sector accountability and transparency. Key topics include raising awareness of AI's impact on the public sector, improving access to AI auditing tools, building public engagement capacity, fostering early community involvement to align AI innovations with public needs, and promoting accessible and inclusive participation in AI development. The workshop will feature two keynotes, two short paper sessions, and three discussion-oriented activities. Our goal is to create a platform for exchanging ideas and developing strategies to design community-engaged RAI systems while mitigating the potential harms of AI and maximizing its benefits in the public sector.
        
    
    
- PAR ID: 10579707
- Publisher / Repository: AAAI Press
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
- Volume: 12
- ISSN: 2769-1330
- Page Range / eLocation ID: 151 to 154
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Responsible AI (RAI) is the science and practice of ensuring the design, development, use, and oversight of AI are socially sustainable, benefiting diverse stakeholders while controlling the risks. Achieving this goal requires active engagement and participation from the broader public. This paper introduces We are AI: Taking Control of Technology, a public education course that brings the topics of AI and RAI to the general audience in a peer-learning setting. We outline the goals behind the course's development, discuss the multi-year iterative process that shaped its creation, and summarize its content. We also discuss two offerings of We are AI to an active and engaged group of librarians and professional staff at New York University, highlighting successes and areas for improvement. The course materials, including a multilingual comic book series by the same name, are publicly available and can be used independently. By sharing our experience in creating and teaching We are AI, we aim to introduce these resources to the community of AI educators, researchers, and practitioners, supporting their public education efforts.
- The use of algorithms and automated systems, especially those leveraging artificial intelligence (AI), has been exploding in the public sector, but their use has been controversial. Ethicists, public advocates, and legal scholars have debated whether biases in AI systems should bar their use or whether the potential net benefits, especially toward traditionally disadvantaged groups, justify even greater expansion. While this debate has become voluminous, no scholars that we are aware of have conducted experiments with the groups affected by these policies about how they view the trade-offs. We conduct a set of two conjoint experiments with a high-quality sample of 973 Americans who identify as Black or African American, in which we randomize the levels of inter-group disparity in outcomes and the net effect on such adverse outcomes in two highly controversial contexts: pre-trial detention and traffic camera ticketing. The results suggest that respondents are willing to tolerate some level of disparity in outcomes in exchange for certain net improvements for their community. These results turn this debate from an abstract ethical argument into an evaluation of political feasibility and policy design based on empirics.
- Industry will take everything it can in developing artificial intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good. Given these inevitabilities, lawmakers will need to change their usual approach to regulating technology. Procedural approaches requiring transparency and consent will not be enough. Merely regulating use of data ignores how information collection and the affordances of tools bestow and exercise power. A better approach involves duties, design rules, defaults, and data dead ends. This layered approach will more squarely address dangerous extraction, harmful normalization, and adversarial self-dealing to better ensure that AI deployments advance the public good.
- There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences.