Title: An Investigation of Misinformation Harms Related to Social Media During Humanitarian Crises
During humanitarian crises, people face danger and need a large amount of information in a short period of time. This need creates fertile ground for misinformation such as rumors, fake news, and hoaxes to spread within and beyond the affected community. It may be unintended misinformation containing unconfirmed details, or intentional disinformation created to deceive people for gain. Either way, it produces information harms that can have serious short-term or long-term consequences. Although researchers have built misinformation detection systems and algorithms, examined the roles of the parties involved, and studied how misinformation spreads and convinces people, very little attention has been paid to the types of harm misinformation causes. In the context of humanitarian crises, we propose a taxonomy of information harms and assess people's perception of the risk those harms pose. Such a taxonomy can serve as the basis for future research that quantitatively measures harms in specific contexts. We also investigate the perceptions of affected people in four purposively chosen scenarios along two dimensions: the likelihood of occurrence and the level of impact of the harms.
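The two perception dimensions map onto a conventional risk-matrix treatment. As a minimal sketch (our illustration, not the paper's reported method), per-scenario likelihood and impact ratings could be combined into a single priority score; the harm types, the 1-5 scale, and the multiplicative rule below are all assumptions:

    # Minimal sketch: combine perceived likelihood and impact into a
    # conventional risk score (likelihood x impact). The harm types, the
    # 1-5 scale, and the scoring rule are illustrative assumptions.
    ratings = {
        # harm type: (likelihood of occurrence, level of impact)
        "physical harm": (2, 5),
        "economic harm": (4, 3),
        "psychological harm": (5, 2),
    }

    risk_scores = {harm: likelihood * impact
                   for harm, (likelihood, impact) in ratings.items()}

    for harm, score in sorted(risk_scores.items(), key=lambda kv: -kv[1]):
        print(f"{harm}: {score}")  # highest-priority harms first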
Misinformation about the coronavirus disease of 2019 (COVID-19) health crisis has been widespread on social media and has caused various types of harm in society. While some researchers have investigated how people perceive misinformation harm in crises, little research has systematically examined the harms of health-related misinformation. To address this gap, we focus on the non-comparative and comparative harm perceptions of the affected community in the context of the COVID-19 pandemic. We examine non-comparative harms (reflected in component harms and contextual harms) and comparative harms (reflected in counter-contextual harms) to understand harm perceptions, and we investigate how harm perception varies with COVID-19 victimization experience. Data were collected through a scenario-based survey of 343 participants administered by the professional survey company Cint. Our findings show how contextual features shape perceived harms and reveal the scenarios in which COVID-19 victims perceive higher contextual harms but lower counter-contextual harms. We also examine how the corrective actions of social media platforms shape harm perceptions.
During COVID-19, misinformation on social media has affected the adoption of appropriate prevention behaviors, so suppressing it is urgent to prevent negative public health consequences. Although an array of studies has proposed misinformation suppression strategies, few have investigated the role of predominant credible information during crises, and none has examined its effect quantitatively using longitudinal social media data. This research therefore investigates the temporal correlations between credible information and misinformation, and whether predominant credible information can suppress misinformation, for two prevention measures (topics): wearing masks and social distancing, using tweets collected from February 15 to June 30, 2020. We trained Support Vector Machine classifiers to retrieve relevant tweets and to separate tweets containing credible information from those containing misinformation for each topic. Based on cross-correlation analyses of the credible-information and misinformation time series for both topics, we find that previously predominant credible information can lead to a decrease in misinformation (i.e., suppression) with a time lag. These findings provide empirical evidence for suppressing misinformation with credible information in complex online environments and suggest practical strategies for future information management during crises and emergencies.
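The pipeline the abstract describes has two standard building blocks. A minimal sketch, assuming scikit-learn, toy labeled tweets, and toy daily volume series (none of this is the authors' released code or data):

    # 1) SVM tweet classifier: credible information (0) vs. misinformation (1).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    train_texts = [
        "masks reduce community transmission",          # toy labeled examples
        "wearing a mask lowers your oxygen levels",
        "keep six feet apart to slow the spread",
        "social distancing does nothing, it's a hoax",
    ]
    train_labels = [0, 1, 0, 1]
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(train_texts, train_labels)

    # 2) Lagged cross-correlation: does credible volume at day t correlate
    #    with misinformation volume at day t + lag?
    def lagged_correlation(credible, misinfo, max_lag=14):
        return {lag: np.corrcoef(credible[:-lag], misinfo[lag:])[0, 1]
                for lag in range(1, max_lag + 1)}

    credible = np.array([10, 14, 18, 22, 26, 24, 28, 30, 27, 29, 33, 35])
    misinfo = np.array([20, 19, 18, 16, 15, 16, 14, 12, 13, 11, 10, 9])
    print(lagged_correlation(credible, misinfo, max_lag=3))
    # A significantly negative correlation at positive lags would be
    # consistent with the suppression effect the paper reports.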
Clark, David D.; claffy, kc
(Social Science Research Network)
One foundational justification for regulatory intervention is that harms are occurring of a character that creates a public interest in mitigating them. This paper is concerned with such harms as they arise in the Internet ecosystem. Judging by news headlines from the last few years, the range of such harms may seem unbounded. Hoping to add some order to the chaos, we undertake an effort to classify harms in the Internet ecosystem, in pursuit of a more or less complete taxonomy of harms. Our goal in structuring this taxonomy is to help mitigate harms in a more systematic way, as opposed to fighting an endless defensive battle against whatever happens next. The background we bring to this paper is on the one hand architectural (how the Internet ecosystem is actually structured) and on the other hand empirical (how we should measure the Internet to best understand what is happening). If everything were wonderful about the Internet today, the need to measure and understand would not be so compelling. A justification for measurement follows from its ability to shed light on problems and challenges. Sustained measurement or compelled reporting of data, and the analysis of the collected data, generally come at considerable effort and cost, so they must be justified by an argument that they will shed light on something important. This reasoning naturally motivates our taxonomy of things that are wrong, which we call harms. That is where we, the research community generally, and governments should focus attention. We do not intend this paper as a catalog of pessimism, but as help in defining an action agenda for the research community and for governments. The paper proceeds "up the layers", from technology to society. For harms closer to the technology, we can be more specific about the harms themselves, about possible measurements and remedies, and about the actors that could undertake them. One motivation for this paper is that we believe the Internet ecosystem is at an inflection point. The Internet has revolutionized our ability to store, move, and process information, including information about people, and we are only beginning to understand its impact on society and how to manage and mitigate the harms resulting from unregulated commercial use of these capabilities. Current events suggest that now is a point of transition from laissez-faire to regulation. However, the path to good regulation is not obvious, and now is the time for the research community to think hard about what advice to give the governments of the world, and what sort of data can back up that advice. Our highest-level goal for this paper is to contribute to a conversation along those lines.
Personal mobility data from mobile phones and other sensors are increasingly used to inform policymaking during pandemics, natural disasters, and other humanitarian crises. However, even aggregated mobility traces can reveal private information about individual movements to potentially malicious actors. This paper develops and tests an approach for releasing private mobility data, which provides formal guarantees over the privacy of the underlying subjects. Specifically, we (1) introduce an algorithm for constructing differentially private mobility matrices and derive privacy and accuracy bounds on this algorithm; (2) use real-world data from mobile phone operators in Afghanistan and Rwanda to show how this algorithm can enable the use of private mobility data in two high-stakes policy decisions: pandemic response and the distribution of humanitarian aid; and (3) discuss practical decisions that need to be made when implementing this approach, such as how to optimally balance privacy and accuracy. Taken together, these results can help enable the responsible use of private mobility data in humanitarian response.
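For intuition, the Laplace mechanism is one standard way to release such a matrix with a formal guarantee; the paper derives its own algorithm and bounds, so the sketch below (with an assumed epsilon, sensitivity bound, and toy counts) illustrates only the basic idea:

    import numpy as np

    def dp_mobility_matrix(counts, epsilon, max_trips_per_user=1):
        # If each user contributes at most max_trips_per_user trips, the
        # L1 sensitivity of the count matrix is max_trips_per_user, so
        # per-cell Laplace noise with scale sensitivity/epsilon yields
        # epsilon-differential privacy.
        scale = max_trips_per_user / epsilon
        noisy = counts + np.random.laplace(0.0, scale, size=counts.shape)
        return np.clip(noisy, 0, None)  # post-processing; privacy preserved

    raw = np.array([[120., 30.], [45., 200.]])  # toy 2-region OD counts
    private = dp_mobility_matrix(raw, epsilon=1.0)

Smaller epsilon gives stronger privacy but noisier counts, which is exactly the privacy-accuracy trade-off the paper discusses.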
Gawronski, Bertram; Nahon, Lea S.; Ng, Nyx L.
(Current Directions in Psychological Science)
Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs.
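In the signal-detection framework the article draws on, discernment and bias are separate quantities. The formulas below are textbook signal detection theory; reading a "hit" as a true item judged true and a "false alarm" as a false item judged true is our gloss of the framing:

    % z(.) is the inverse standard normal CDF; HR = hit rate (true items
    % judged true), FAR = false-alarm rate (false items judged true).
    d' = z(\mathrm{HR}) - z(\mathrm{FAR})
    % sensitivity: how well true and false information are discerned
    c = -\tfrac{1}{2}\left( z(\mathrm{HR}) + z(\mathrm{FAR}) \right)
    % criterion: overall bias toward judging items "true" or "false"

On this reading, the article's three counterclaims roughly translate to high d' overall, a criterion c that shifts strongly with partisan congruence, and skepticism (a conservative criterion toward belief-incongruent items) outweighing gullibility (a liberal criterion toward belief-congruent items).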
Tran, T., & Valecha, R. An Investigation of Misinformation Harms Related to Social Media During Humanitarian Crises. Secure Knowledge Management in Artificial Intelligence Era. https://par.nsf.gov/biblio/10196223. doi:10.1007/978-981-15-3817-9_10