Significant gaps remain between public opinion and the scientific consensus on many issues. We present the results of three studies (N = 722 in total) developing and testing a novel instrument to measure a largely unmeasured aspect of scientific literacy: the enterprise of science, particularly in the context of its social structures. We posit that this understanding of the scientific enterprise is an important source of the public's trust in science. Our results indicate that the Social Enterprise of Science Index (SESI) is a reliable and valid instrument that correlates positively with trust in science (r = .256, p < .001) and level of education (r = .245, p < .001). We also develop and validate a six-question short version of the SESI for ease of use in longer surveys.
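As a rough illustration of the kind of validation statistics reported above, the sketch below computes Pearson correlations (with p-values) between a hypothetical six-item SESI short form and placeholder trust and education measures, plus Cronbach's alpha for internal consistency. All data, column names, and effect sizes are invented for illustration; this is not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): validation statistics of the kind
# reported in the abstract, computed on hypothetical data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 722  # total sample size reported across the three studies

# Hypothetical responses: six short-form SESI items on a 1-5 Likert scale,
# plus placeholder trust-in-science and education measures.
items = pd.DataFrame(
    rng.integers(1, 6, size=(n, 6)), columns=[f"sesi_{i}" for i in range(1, 7)]
)
sesi_total = items.sum(axis=1)
trust = sesi_total * 0.3 + rng.normal(0, 3, n)      # placeholder trust scale
education = sesi_total * 0.2 + rng.normal(0, 3, n)  # placeholder education level

# Convergent validity: Pearson correlations with significance tests,
# analogous to the reported r = .256 and r = .245 (p < .001).
for name, other in [("trust", trust), ("education", education)]:
    r, p = pearsonr(sesi_total, other)
    print(f"SESI vs {name}: r = {r:.3f}, p = {p:.3g}")

# Internal consistency (Cronbach's alpha) for the six-item short form.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / sesi_total.var(ddof=1))
print(f"Cronbach's alpha (6-item short form): {alpha:.2f}")
```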
Trust of Science as a Public Collective Good
The COVID-19 pandemic and the global climate change crisis remind us that widespread trust in the products of the scientific enterprise is vital to the health and safety of the global community. Insofar as appropriate responses to these (and other) crises require us to trust that enterprise, cultivating a healthier trust relationship between science and the public may be considered a collective public good. While it might appear that scientists can contribute to this good by taking more initiative to communicate their work to public audiences, we raise a concern about the unintended consequences of an individualistic approach to such communication.
- Award ID(s): 1734616
- PAR ID: 10489120
- Publisher / Repository: The Philosophy of Science Association
- Date Published:
- Journal Name: Philosophy of Science
- Volume: 89
- Issue: 5
- ISSN: 0031-8248
- Page Range / eLocation ID: 1044 to 1053
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Regulatory agencies aim to protect the public by moderating risks associated with innovation, but a good regulatory regime should also promote justified public trust. After introducing the USDA 2020 SECURE Rule for regulation of biotech innovation as a case study, this essay develops a theory of justified public trust in regulation. On the theory advanced here, to be trustworthy, a regulatory regime must (1) fairly and effectively manage risk, (2) be "science based" in the relevant sense, and in addition be (3) truthful, (4) transparent, and (5) responsive to public input. Evaluated against these norms, the USDA SECURE Rule is shown to be deeply flawed: it fails to manage risk appropriately and similarly fails to satisfy the other normative requirements for justified trust. The argument identifies ways in which the SECURE Rule itself might be improved, but more broadly it provides a normative framework for evaluating trustworthy regulatory policy-making.
-
The foundations of Artificial Intelligence (AI), a field whose applications are of great use and concern to society, can be traced back to the early years of the second half of the 20th century. Since then, the field has seen cycles of increased research output and funding followed by setbacks. The new millennium has brought unprecedented interest in AI progress and expectations, with significant financial investments from the public and private sectors. However, continued acceleration of AI capabilities and real-world applications is not guaranteed. In particular, accountability of AI systems, in the context of the interplay between AI and the broader society, is essential for their adoption, since adoption depends on the trust placed in them. Continued progress in AI research and development (R&D) can help tackle humanity's most significant challenges and improve social good. The authors of this paper suggest that the careful design of forward-looking research policies serves a crucial function in avoiding potential future setbacks in AI research, development, and use. The United States (US) has kept its leading role in R&D, largely shaping global trends in the field. Accordingly, this paper presents a critical assessment of the US National AI R&D Strategic Plan and prescribes six recommendations to improve future research strategies in the US and around the globe.
-
Importance: Trust in physicians and hospitals has been associated with achieving public health goals, but the increasing politicization of public health policies during the COVID-19 pandemic may have adversely affected such trust. Objective: To characterize changes in US adults' trust in physicians and hospitals over the course of the COVID-19 pandemic and the association between this trust and health-related behaviors. Design, Setting, and Participants: This survey study uses data from 24 waves of a nonprobability internet survey conducted between April 1, 2020, and January 31, 2024, among 443 455 unique respondents aged 18 years or older residing in the US, with state-level representative quotas for race and ethnicity, age, and gender. Main Outcome and Measure: Self-report of trust in physicians and hospitals; self-report of SARS-CoV-2 and influenza vaccination and booster status. Survey-weighted regression models were applied to examine associations between sociodemographic features and trust and between trust and health behaviors (a minimal sketch of such a model appears after this list). Results: The combined data included 582 634 responses across 24 survey waves, reflecting 443 455 unique respondents. The unweighted mean (SD) age was 43.3 (16.6) years; 288 186 respondents (65.0%) reported female gender; 21 957 (5.0%) identified as Asian American, 49 428 (11.1%) as Black, 38 423 (8.7%) as Hispanic, 3138 (0.7%) as Native American, 5598 (1.3%) as Pacific Islander, 315 278 (71.1%) as White, and 9633 (2.2%) as other race and ethnicity (those who selected "Other" from a checklist). Overall, the proportion of adults reporting a lot of trust in physicians and hospitals decreased from 71.5% (95% CI, 70.7%-72.2%) in April 2020 to 40.1% (95% CI, 39.4%-40.7%) in January 2024. In regression models, features associated with lower trust as of spring and summer 2023 included being 25 to 64 years of age, female gender, lower educational level, lower income, Black race, and living in a rural setting. These associations persisted even after controlling for partisanship. In turn, greater trust was associated with greater likelihood of vaccination for SARS-CoV-2 (adjusted odds ratio [OR], 4.94; 95% CI, 4.21-5.80) or influenza (adjusted OR, 5.09; 95% CI, 3.93-6.59) and of receiving a SARS-CoV-2 booster (adjusted OR, 3.62; 95% CI, 2.99-4.38). Conclusions and Relevance: This survey study of US adults suggests that trust in physicians and hospitals decreased during the COVID-19 pandemic. As lower levels of trust were associated with a lesser likelihood of pursuing vaccination, restoring trust may represent a public health imperative.
-
The objective of this paper is to establish the fundamental public value principles that should govern safe and trusted artificial intelligence (AI). Public value is a dynamic concept that encompasses several dimensions. AI itself has evolved rapidly in the last few years, especially with the swift escalation of Generative AI. Governments around the world are grappling with how to govern AI, just as technologists ring alarm bells about its future consequences. Our paper extends the debate on AI governance from the ethical values of beneficence to the economic values of public good. Viewed as a public good, the use of AI is beyond the control of its creators. Toward this end, the paper examines AI policies in the United States and Europe. We postulate three principles from a public values perspective: (i) ensuring the security and privacy of each individual (or entity); (ii) ensuring that trust in AI systems is verifiable; and (iii) ensuring fair and balanced AI protocols, wherein the underlying components of data and algorithms are contestable and open to public debate.
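To make the survey-weighted regression described in the third related item above more concrete, here is a minimal sketch of a weighted logistic regression producing adjusted odds ratios. The data, variable names, and weights are all hypothetical; this is not the study's analysis code, and a full design-based survey analysis would additionally use design-aware (robust) standard errors.

```python
# Minimal sketch (not the study's code): weighted logistic regression relating
# trust to vaccination, on simulated data, reported as adjusted odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical survey data: trust indicator, basic demographics, and a
# post-stratification weight (all invented for illustration).
df = pd.DataFrame({
    "trust": rng.integers(0, 2, n),       # 1 = reports "a lot" of trust
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),   # survey weight
})

# Simulated outcome: vaccination, more likely when trust is high.
logit = -0.5 + 1.6 * df["trust"] + 0.01 * (df["age"] - 45)
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Weighted logistic regression (pseudo-likelihood via frequency weights).
X = sm.add_constant(df[["trust", "age", "female"]])
fit = sm.GLM(df["vaccinated"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()

# Adjusted odds ratios with 95% CIs, analogous to the ORs quoted above.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```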