

This content will become publicly available on August 1, 2025

Title: Empathy and AI: Achieving Equitable Microtransit for Underserved Communities
This paper describes a newly launched project that will produce a new approach to public microtransit for underserved communities. Public microtransit cannot rely on pricing signals to manage demand, and current approaches face the challenges of simultaneously being underutilized and overextended. This project conceives of the setting as a sociotechnical system. Its main idea is to engage users through AI agents in conjunction with platform constraints to find solutions that purely technical conceptions cannot find. The project was specified over an intense series of discussions with key stakeholders (riders, city government, and nongovernmental agencies) and brings together expertise in the disciplines of AI, Operations Research, Urban Planning, Psychology, and Community Development. The project will culminate in a pilot study, results from which will facilitate the transfer of its technology to additional communities.
Award ID(s):
2325720
PAR ID:
10538107
Author(s) / Creator(s):
Publisher / Repository:
International Joint Conferences on Artificial Intelligence Organization
Date Published:
Volume:
30
ISBN:
978-1-956792-04-1
Page Range / eLocation ID:
7179 to 7187
Format(s):
Medium: X
Location:
Jeju, South Korea
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding “how to optimize the production of scientific knowledge” is paramount to those who support scientific research—funders as well as research institutions—to the communities served, and to researchers. Structured archives can help all involved to learn what decisions and processes help or hinder the production of new knowledge. Using artificial intelligence (AI) and large language models (LLMs), we recently created the first structured digital representation of the historic archives of the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health. This work yielded a digital knowledge base of entities, topics, and documents that can be used to probe the inner workings of the Human Genome Project, a massive international public-private effort to sequence the human genome, and several of its offshoots like The Cancer Genome Atlas (TCGA) and the Encyclopedia of DNA Elements (ENCODE). The resulting knowledge base will be instrumental in understanding not only how the Human Genome Project and genomics research developed collaboratively, but also how scientific goals come to be formulated and evolve. Given the diverse and rich data used in this project, we evaluated the ethical implications of employing AI and LLMs to process and analyze this valuable archive. As the first computational investigation of the internal archives of a massive collaborative project with multiple funders and institutions, this study will inform future efforts to conduct similar investigations while also considering and minimizing ethical challenges. 
Our methodology and risk-mitigating measures could also inform future initiatives in developing standards for project planning, policymaking, enhancing transparency, and ensuring ethical utilization of artificial intelligence technologies and large language models in archive exploration.
Author Contributions: Mohammad Hosseini: Investigation; Project Administration; Writing – original draft; Writing – review & editing. Spencer Hong: Conceptualization; Data curation; Investigation; Methodology; Software; Visualization; Writing – original draft; Writing – review & editing. Thomas Stoeger: Conceptualization; Investigation; Project Administration; Supervision; Writing – original draft; Writing – review & editing. Kristi Holmes: Funding acquisition; Supervision; Writing – review & editing. Luis A. Nunes Amaral: Funding acquisition; Supervision; Writing – review & editing. Christopher Donohue: Conceptualization; Project administration; Resources; Supervision; Writing – original draft; Writing – review & editing. Kris Wetterstrand: Conceptualization; Funding acquisition; Project administration.
  2. Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine perspectives on AI technologies based on 20 semi-structured interviews with law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, impacts the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI plays a limited role in policing but believe the technologies will continue to expand, improving public safety and increasing policing capability. Conversely, police officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. It is thus argued that the trends toward integrating AI technologies into law enforcement are not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals must aim to promote accountability through responsible design of AI in policing, with the end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.
  3. The public sector leverages artificial intelligence (AI) to enhance the efficiency, transparency, and accountability of civic operations and public services. This includes initiatives such as predictive waste management, facial recognition for identification, and advanced tools in the criminal justice system. While public-sector AI can improve efficiency and accountability, it also has the potential to perpetuate biases, infringe on privacy, and marginalize vulnerable groups. Responsible AI (RAI) research aims to address these concerns by focusing on fairness and equity through participatory AI. We invite researchers, community members, and public sector workers to collaborate on designing, developing, and deploying RAI systems that enhance public sector accountability and transparency. Key topics include raising awareness of AI's impact on the public sector, improving access to AI auditing tools, building public engagement capacity, fostering early community involvement to align AI innovations with public needs, and promoting accessible and inclusive participation in AI development. The workshop will feature two keynotes, two short paper sessions, and three discussion-oriented activities. Our goal is to create a platform for exchanging ideas and developing strategies to design community-engaged RAI systems while mitigating the potential harms of AI and maximizing its benefits in the public sector.
  4. Although development of Artificial Intelligence (AI) technologies has been underway for decades, the acceleration of AI capabilities and rapid expansion of user access in the past few years has elicited public excitement as well as alarm. Leaders in government and academia, as well as members of the public, are recognizing the critical need for the ethical production and management of AI. As a result, society is placing immense trust in engineering undergraduate and graduate programs to train future developers of AI in their ethical and public welfare responsibilities. In this paper, we investigate whether engineering master’s students believe they receive the training they need from their educational curricula to negotiate this complex ethical landscape. The goal of the broader project is to understand how engineering students become public welfare “watchdogs”; i.e., how they learn to recognize and respond to their public welfare responsibilities. As part of this project, we conducted in-depth interviews with 62 electrical and computer engineering master’s students at a large public university about their educational experiences and understanding of engineers’ professional responsibilities, including those related specifically to AI technologies. This paper asks, (1) do engineering master’s students see potential dangers of AI related to how the technologies are developed, used, or possibly misused? (2) Do they feel equipped to handle the challenges of these technologies and respond ethically when faced with difficult situations? (3) Do they hold their engineering educators accountable for training them in ethical concerns around AI? We find that although some engineering master’s students see exciting possibilities of AI, most are deeply concerned about the ethical and public welfare issues that accompany its advancement and deployment. 
While some students feel equipped to handle these challenges, the majority feel unprepared to manage these complex situations in their professional work. Additionally, students reported that the ethical development and application of technologies like AI is often not included in curricula or is treated as a "soft skill" less important than "technical" knowledge. Although some students we interviewed shared the apathy toward these topics that they perceive in their engineering programs, most were eager to receive more training in AI ethics. These results underscore the pressing need for engineering education programs, including graduate programs, to integrate comprehensive ethics, public responsibility, and whistleblower training within their curricula to ensure that the engineers of tomorrow are well-equipped to address the novel ethical dilemmas of AI that are likely to arise in the coming years.
  5. There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. 
These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences. 