Abstract What were the impacts of the Covid‐19 pandemic on trust in public health information, and what can be done to rebuild trust in public health authorities? This essay synthesizes insights from science and technology studies, information studies, and bioethics to explore sociotechnical factors that may have contributed to the breakdown of trust in public health information during the Covid‐19 pandemic. The field of science and technology studies lays out the dynamic nature of facts, helping to explain rapid shifts in public health messaging during Covid‐19 and reasons they produced a lack of trust in public health authorities. The information field looks at how facts are sociotechnically constructed through systems of classification, illustrating how extrascientific factors influence public health authorities. Putting these perspectives alongside bioethics principles raises additional factors to consider. The goal of this essay is to learn from past failures to point toward a brighter future where trust in public health authorities can be rebuilt, not on faith, but rather through striving for calibrated trust within which, through a virtuous circle, trust is validated.
Reporting on Science as an Ongoing Process (or Not)
Efforts to cultivate scientific literacy in the public are often aimed at enabling people to make more informed decisions — both in their own lives (e.g., personal health, sustainable practices, &c.) and in the public sphere. Implicit in such efforts is the cultivation of some measure of trust of science. To what extent does science reporting in mainstream newspapers contribute to these goals? Is what is reported likely to improve the public's understanding of science as a process for generating reliable knowledge? What are its likely effects on public trust of science? In this paper, we describe a content analysis of 163 instances of science reporting in three prominent newspapers from three years in the last decade. The dominant focus, we found, was on particular outcomes of cutting-edge science; it was comparatively rare for articles to attend to the methodology or the social–institutional processes by which particular results come about. We argue that, at best, this represents a missed opportunity.
- Award ID(s): 1734616
- PAR ID: 10489118
- Publisher / Repository: Frontiers Media SA
- Date Published:
- Journal Name: Frontiers in Communication
- Volume: 5
- ISSN: 2297-900X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract If contextual values can play necessary and beneficial roles in scientific research, to what extent should science communicators be transparent about such values? This question is particularly pressing in contexts where there appears to be significant resistance among some non-experts to accept certain scientific claims or adopt science-based policies or recommendations. This paper examines whether value transparency can help promote non-experts' warranted epistemic trust of experts. I argue that there is a prima facie case in favor of transparency because it can promote four conditions that are thought to be required for epistemic trustworthiness. I then consider three main arguments that transparency about values is likely to be ineffective in promoting such trust (and may undermine it). This analysis shows that while these arguments show that value transparency is not sufficient for promoting epistemic trust, they fail to show that rejecting value transparency as a norm for science communicators is more likely to promote warranted epistemic trust than a qualified norm of value transparency (along with other strategies). Finally, I endorse a tempered understanding of value transparency and consider what this might require in practice.
As artificial intelligence (AI) profoundly reshapes our personal and professional lives, there are growing calls to support pre-college aged youth as they develop capacity to engage critically and productively with AI. While efforts to introduce AI concepts to pre-college aged youth have largely focused on older teens, there is growing recognition of the importance of developing AI literacy among younger children. Today’s youth already encounter and use AI regularly, but they might not yet be aware of its role, limitations, risks, or purpose in a particular encounter, and may not be positioned to question whether it should be doing what it’s doing. In response to this critical moment to develop AI learning experiences that can support children at this age, researchers and learning designers at the University of California’s Lawrence Hall of Science, in collaboration with AI developers at the University of Southern California’s Institute for Creative Technologies, have been iteratively developing and studying a series of interactive learning experiences for public science centers and similar out-of-school settings. The project is funded through a grant by the National Science Foundation and the resulting exhibit, The Virtually Human Experience (VHX), represents one of the first interactive museum exhibits in the United States designed explicitly to support young children and their families in developing understanding of AI. The coordinated experiences in VHX include both digital (computer-based) and non-digital (“unplugged”) activities designed to engage children (ages 7-12) and their families in learning about AI. In this paper, we describe emerging insights from a series of case studies that track small groups of museum visitors (e.g. a parent and two children) as they interact with the exhibit. 
The case studies reveal opportunities and challenges associated with designing AI learning experiences for young children in a free-choice environment like a public science center. In particular, we focus on three themes emerging from our analyses of case data: 1) relationships between design elements and collaborative discourse within intergenerational groups (i.e., families and other adult-child pairings); 2) relationships between design elements and impromptu visitor experimentation within the exhibit space; and 3) challenges in designing activities with a low threshold for initial engagement such that even the youngest visitors can engage meaningfully with the activity. Findings from this study are directly relevant to support researchers and learning designers engaged in rapidly expanding efforts to develop AI learning opportunities for youth, and are likely to be of interest to a broad range of researchers, designers, and practitioners as society encounters this transformative technology and its applications become increasingly integral to how we live and work.
Microplastics have been found in the most remote locations on Earth as well as where we live, work, and play. Despite increasing research focus on microplastics, efforts to inform the public about their omnipresence have lagged. To bridge this gap between research and public knowledge, we developed a museum exhibit with interactive and informative displays that explain what microplastics are, how they are formed, where they are found, and what individuals can do about it. In a partnership between researchers at the University of Michigan (Ann Arbor) and staff at the Dossin Great Lakes Museum (Detroit), the exhibit highlights the impacts of microplastic pollution in the region. Collected survey data revealed that museum visitors were aware of and worried about microplastic pollution, that they felt the museum exhibit was helpful and informative, and that they were likely to take simple actions to decrease microplastic pollution.
Mahmoud, Ali B. (Ed.)
Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field seems to be mixed. Although high expectations for the future of medical AI do exist in the American public, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the trust of the American public towards medical AI. Primarily, we contrasted preferences for AI and human professionals to be medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals make medical decisions, while at the same time believing they are more likely to make culturally biased decisions than AI. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and "100 years from now." (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment, but more so in the distant future than immediately.