Abstract
Artificially intelligent communication technologies (AICTs) that operate autonomously with high degrees of conversational fluency can make communication decisions on behalf of their principal users and communicate with those principals’ audiences on their behalf. In this study, we explore how the involvement of AICTs in communication activities shapes how principals engage in impression management and how their communication partners form impressions of them. Through an inductive, comparative field study of users of two AI scheduling technologies, we uncover three communicative practices through which principals engage in impression management when AICTs communicate on their behalf: interpretation, diplomacy, and staging politeness. We also uncover three processes through which communication partners form impressions of principals when communicating with them via AICTs: confirmation, transference, and compartmentalization. We show that communication partners can transfer impressions of AICTs to the principals themselves and outline the conditions under which such transference is and is not likely. We discuss the implications of these findings for the study of technological mediation of impression management and formation in the age of artificial intelligence and present propositions to guide future empirical research.
Designing Fiduciary Artificial Intelligence
A fiduciary is a trusted agent that has the legal duty to act with loyalty and care toward a principal that employs them. When fiduciary organizations interact with users through a digital interface, or otherwise automate their operations with artificial intelligence, they will need to design these AI systems to be compliant with their duties. This article synthesizes recent work in computer science and law to develop a procedure for designing and auditing Fiduciary AI. The designer of a Fiduciary AI should understand the context of the system, identify its principals, and assess the best interests of those principals. The designer must then be loyal with respect to those interests, and careful in a contextually appropriate way. We connect the steps in this procedure to dimensions of Trustworthy AI, such as privacy and alignment. Fiduciary AI is a promising means to address the incompleteness of data subjects’ consent when interacting with complex technical systems.
- Award ID(s): 2105301
- PAR ID: 10472957
- Publisher / Repository: ACM
- Date Published:
- Journal Name: EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
- ISBN: 9798400703812
- Page Range / eLocation ID: 1 to 15
- Subject(s) / Keyword(s): Security and privacy; Human and societal aspects of security and privacy; Social and professional topics; Computing / technology policy; Computing methodologies; Artificial intelligence
- Format(s): Medium: X
- Location: Boston, MA, USA
- Sponsoring Org: National Science Foundation
More Like this
- AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.
- Foundation Models (FMs) are gaining increasing attention in the biomedical artificial intelligence (AI) ecosystem due to their ability to represent and contextualize multimodal biomedical data. These capabilities make FMs a valuable tool for a variety of tasks, including biomedical reasoning, hypothesis generation, and interpreting complex imaging data. In this review paper, we address the unique challenges associated with establishing an ethical and trustworthy biomedical AI ecosystem, with a particular focus on the development of FMs and their downstream applications. We explore strategies that can be implemented throughout the biomedical AI pipeline to effectively tackle these challenges, ensuring that these FMs are translated responsibly into clinical and translational settings. Additionally, we emphasize the importance of key stewardship and co-design principles that not only ensure robust regulation but also guarantee that the interests of all stakeholders—especially those involved in or affected by these clinical and translational applications—are adequately represented. We aim to empower the biomedical AI community to harness these models responsibly and effectively. As we navigate this exciting frontier, our collective commitment to ethical stewardship, co-design, and responsible translation will be instrumental in ensuring that the evolution of FMs truly enhances patient care and medical decision-making, ultimately leading to a more equitable and trustworthy biomedical AI ecosystem.
- Abstract The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum’s capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders’ ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally permissible interactions between AI systems and those impacted by their functioning, and two sufficient conditions for realizing the ideal of meaningful benefit. We then contrast this ideal with several salient failure modes, namely, forms of social interactions that constitute unjustified paternalism, coercion, deception, exploitation, and domination. The proliferation of incidents involving AI in high-stakes domains underscores the gravity of these issues and the imperative to take an ethics-led approach to AI systems from their inception.
- AI curricula are being developed and tested in classrooms, but wider adoption is premised on teacher professional development and buy-in. When engaging in professional development, curricula are often treated as static and set in stone, and educators are prepared to offer the curriculum as written instead of empowered to be leaders in efforts to spread and sustain AI education. This limits the degree to which teachers tailor new curricula to student needs and interests, ultimately distancing students from new and potentially relevant content. This paper describes an AI Educator Make-a-Thon, a two-day gathering of 34 educators from across the United States that centered co-design of AI literacy materials as the culminating experience of a year-long professional development program called Everyday AI (EdAI), in which educators studied and practiced implementing an innovative curriculum for Developing AI Literacy (DAILy) in their classrooms. Inspired by the energizing and empowering experiences of Hack-a-Thons, the Make-a-Thon was designed to increase the depth and longevity of the educators’ investment in AI education by positively impacting their sense of belonging to the AI community, their AI content knowledge, and their self-confidence as AI curriculum designers. In this paper we describe the Make-a-Thon design, findings, and recommendations for future educator-centered Make-a-Thons.