Title: Gaps in Information Access in Social Networks
The study of influence maximization in social networks has largely ignored the disparate effects these algorithms might have on the individuals in the network. Individuals may place a high value on receiving information, e.g., job openings or advertisements for loans. While well-connected individuals at the center of the network are likely to receive information distributed through the network, poorly connected individuals are systematically less likely to receive it, producing a gap in access to the information between individuals. In this work, we study how best to spread information in a social network while minimizing this access gap. We propose the maximin social welfare function as an objective: maximize the minimum probability, over individuals, of receiving the information under an intervention. We prove that in this setting the maximin objective constrains the access gap, whereas maximizing the expected number of nodes reached does not. We also investigate the difficulties of using the maximin, and present hardness results and analysis for standard greedy strategies. Finally, we investigate practical ways of optimizing for the maximin, and give empirical evidence that a simple greedy-based strategy works well in practice.
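The abstract's "simple greedy-based strategy" is not spelled out here; below is a minimal illustrative sketch, assuming an independent cascade model with uniform edge probability `p`, Monte Carlo estimation of each node's access probability, and greedy selection of the seed that most raises the minimum probability. All function names and parameters are hypothetical, not the paper's implementation.

```python
import random

def simulate_reach(graph, seeds, p, rng):
    """One independent-cascade simulation; returns the set of informed nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def access_probabilities(graph, seeds, p=0.1, trials=500):
    """Monte Carlo estimate of each node's probability of being reached."""
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    counts = dict.fromkeys(nodes, 0)
    for _ in range(trials):
        for v in simulate_reach(graph, seeds, p, rng):
            counts[v] += 1
    return {v: c / trials for v, c in counts.items()}

def greedy_maximin(graph, k, p=0.1, trials=500):
    """Greedily add the seed that most raises the minimum access probability."""
    nodes = sorted(set(graph) | {v for vs in graph.values() for v in vs})
    seeds = []
    for _ in range(k):
        best, best_min = None, -1.0
        for cand in (n for n in nodes if n not in seeds):
            worst = min(access_probabilities(graph, seeds + [cand], p, trials).values())
            if worst > best_min:
                best, best_min = cand, worst
        seeds.append(best)
    return seeds
```

On a graph with two disconnected communities, this objective forces the seed set to cover both (otherwise the minimum probability is zero), whereas maximizing expected reach could place every seed in the larger community.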
Award ID(s):
1633387 1633400 1633724
NSF-PAR ID:
10120545
Journal Name:
The Web Conference (WWW)
Page Range / eLocation ID:
480 to 490
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Diffusion of information in social networks has been the focus of intense research in recent decades, owing to its significant role in shaping public discourse through group and individual influence. Existing research primarily models influence as a binary property of entities: influenced or not influenced. While this is a useful abstraction, it discards the notion of degree of influence, i.e., that certain individuals may be influenced ``more'' than others. We introduce the notion of \emph{attitude}, which, as described in social psychology, is the degree to which an entity is influenced by the information. Intuitively, attitude captures the number of distinct neighbors of an entity influencing the latter. We present an information diffusion model (the AIC model) that quantifies the degree of influence, i.e., the attitude of individuals, in a social network. With this model, we formulate and study the attitude maximization problem. We prove that the function computing attitude is monotone and submodular, and that the attitude maximization problem is NP-hard. We present a greedy algorithm for maximization with an approximation guarantee of $(1-1/e)$. In the context of the AIC model, we study two further problems, aimed at scenarios where reaching individuals with high attitude is more important than maximizing the attitude of the entire network. In the first problem, we introduce the notion of \emph{actionable attitude}; intuitively, individuals with actionable attitude are likely to ``act'' on their attained attitude. We show that the function computing actionable attitude, unlike that for attitude, is not submodular, but it is \emph{approximately submodular}, and we present an approximation algorithm for maximizing actionable attitude in a network. In the second problem, we consider counting the individuals in the network whose attitude exceeds a given threshold. Here the function computing the number of individuals with attitude above the threshold induced by a seed set is \emph{neither submodular nor supermodular}, and we present heuristics for solving this problem. We experimentally evaluated our algorithms and studied empirical properties of node attitudes, such as the spatial and value distributions of high-attitude nodes.
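The greedy algorithm with the $(1-1/e)$ guarantee is the standard one for monotone submodular maximization. A minimal sketch follows, assuming a simplified AIC-style cascade in which a node's attitude counts the distinct neighbors whose influence attempt on it succeeds (seeds start with attitude 1); the paper's exact model may differ, and all names here are hypothetical.

```python
import random

def expected_attitude(graph, seeds, p=0.1, trials=300):
    """Monte Carlo estimate of expected total attitude: each successful
    influence attempt by a distinct neighbor increments the target's attitude,
    whether or not the target was already active."""
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    total = 0.0
    for _ in range(trials):
        attitude = {s: 1 for s in seeds}
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if rng.random() < p:
                        attitude[v] = attitude.get(v, 0) + 1
                        if v not in active:
                            active.add(v)
                            nxt.append(v)
            frontier = nxt
        total += sum(attitude.values())
    return total / trials

def greedy_attitude(graph, k, p=0.1, trials=300):
    """Standard greedy for a monotone submodular objective (1 - 1/e guarantee)."""
    nodes = sorted(set(graph) | {v for vs in graph.values() for v in vs})
    seeds = []
    for _ in range(k):
        gains = {c: expected_attitude(graph, seeds + [c], p, trials)
                 for c in nodes if c not in seeds}
        seeds.append(max(gains, key=gains.get))
    return seeds
```

On a star graph, seeding the hub yields more expected attitude than seeding a leaf, since the hub influences every leaf directly, so the greedy picks the hub first.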
  2.
    In many societal resource allocation domains, machine learning methods are increasingly used to either score or rank agents in order to decide which ones should receive either resources (e.g., homeless services) or scrutiny (e.g., child welfare investigations) from social services agencies. An agency’s scoring function typically operates on a feature vector that contains a combination of self-reported features and information available to the agency about individuals or households. This can create incentives for agents to misrepresent their self-reported features in order to receive resources or avoid scrutiny, but agencies may be able to selectively audit agents to verify the veracity of their reports. We study the problem of optimal auditing of agents in such settings. When decisions are made using a threshold on an agent’s score, the optimal audit policy has a surprisingly simple structure, uniformly auditing all agents who could benefit from lying. While this policy can, in general, be hard to compute because of the difficulty of identifying the set of agents who could benefit from lying given a complete set of reported types, we also present necessary and sufficient conditions under which it is tractable. We show that the scarce resource setting is more difficult, and exhibit an approximately optimal audit policy in this case. In addition, we show that in either setting verifying whether it is possible to incentivize exact truthfulness is hard even to approximate. However, we also exhibit sufficient conditions for solving this problem optimally, and for obtaining good approximations.
  3. In many societal resource allocation domains, machine learning methods are increasingly used to either score or rank agents in order to decide which ones should receive either resources (e.g., homeless services) or scrutiny (e.g., child welfare investigations) from social services agencies. An agency’s scoring function typically operates on a feature vector that contains a combination of self-reported features and information available to the agency about individuals or households. This can create incentives for agents to misrepresent their self-reported features in order to receive resources or avoid scrutiny, but agencies may be able to selectively audit agents to verify the veracity of their reports. We study the problem of optimal auditing of agents in such settings. When decisions are made using a threshold on an agent’s score, the optimal audit policy has a surprisingly simple structure, uniformly auditing all agents who could benefit from lying. While this policy can, in general, be hard to compute because of the difficulty of identifying the set of agents who could benefit from lying given a complete set of reported types, we also present necessary and sufficient conditions under which it is tractable. We show that the scarce resource setting is more difficult, and exhibit an approximately optimal audit policy in this case. In addition, we show that in either setting verifying whether it is possible to incentivize exact truthfulness is hard even to approximate. However, we also exhibit sufficient conditions for solving this problem optimally, and for obtaining good approximations.
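The threshold-case policy described above (uniformly audit every agent who could benefit from lying) can be sketched as follows. `possible_true_types` is a hypothetical helper encoding which true types are consistent with a given report; the abstract notes that computing this set is exactly what can make the policy hard in general.

```python
def uniform_audit_set(reports, score, threshold, possible_true_types):
    """Illustrative sketch: audit every agent whose report clears the score
    threshold but who could plausibly hold a true type that would not have
    cleared it (i.e., who could benefit from lying).
    `reports` maps agent ids to reported feature vectors;
    `possible_true_types(agent_id, report)` yields the candidate true types
    consistent with that report (a hypothetical, domain-specific helper)."""
    audit = set()
    for agent_id, report in reports.items():
        if score(report) >= threshold and any(
            score(t) < threshold for t in possible_true_types(agent_id, report)
        ):
            audit.add(agent_id)
    return audit
```

In the resource-allocation reading, an agent below the threshold gains nothing by misreporting downward, so only agents whose reports clear the threshold need auditing; each of those is audited with the same probability.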
  4. Abstract

    The drive to broaden equitable access to undergraduate research experiences has catalyzed the development and implementation of course‐based undergraduate research experiences (CUREs). Biology education has prioritized embedding CUREs in introductory labs, which are frequently taught by graduate teaching assistants (GTAs). Thus, a CURE GTA is expected not only to teach but also to support novice student researchers. We know little about how GTAs perform as research mentors in a CURE, or how the quality of their mentorship and support impacts undergraduate students. To address this gap in knowledge, we conducted a phenomenological study of an introductory biology CURE, interviewing 25 undergraduate students taught by nine different GTAs at a single institution. We used self‐determination theory to guide our exploration of how students' autonomous motivation to engage in a CURE is impacted by perceptions of GTA support. We found that highly motivated students were more likely to experience factors hypothesized to optimize motivation in the CURE, and to perceive that their GTA was highly supportive of these elements. Students with lower motivation were less likely to report engaging in fundamental elements of research offered in a CURE. Our findings suggest that GTAs directly impact students' motivation, which can, in turn, influence whether students perceive receiving the full research experience as intended in a CURE. We contend that practitioners who coordinate CUREs led by GTAs should therefore offer curated training that emphasizes supporting students' autonomous motivation in the course and engagement in the research. Our work suggests that GTAs may differ in their capacity to provide students with the support they need to receive and benefit from certain pedagogical practices. Future work assessing innovative approaches in undergraduate biology laboratory courses should continue to investigate potential differential outcomes for students taught by GTAs.

     
  5. Importance

    The COVID-19 pandemic has been notable for the widespread dissemination of misinformation regarding the virus and appropriate treatment.

    Objective

    To quantify the prevalence of non–evidence-based treatment for COVID-19 in the US and the association between such treatment and endorsement of misinformation as well as lack of trust in physicians and scientists.

    Design, Setting, and Participants

    This single-wave, population-based, nonprobability internet survey study was conducted between December 22, 2022, and January 16, 2023, in US residents 18 years or older who reported prior COVID-19 infection.

Main Outcomes and Measures

    Self-reported use of ivermectin or hydroxychloroquine, endorsing false statements related to COVID-19 vaccination, self-reported trust in various institutions, conspiratorial thinking measured by the American Conspiracy Thinking Scale, and news sources.

    Results

    A total of 13 438 individuals (mean [SD] age, 42.7 [16.1] years; 9150 [68.1%] female and 4288 [31.9%] male) who reported prior COVID-19 infection were included in this study. In this cohort, 799 (5.9%) reported prior use of hydroxychloroquine (527 [3.9%]) or ivermectin (440 [3.3%]). In regression models including sociodemographic features as well as political affiliation, those who endorsed at least 1 item of COVID-19 vaccine misinformation were more likely to receive non–evidence-based medication (adjusted odds ratio [OR], 2.86; 95% CI, 2.28-3.58). Those reporting trust in physicians and hospitals (adjusted OR, 0.74; 95% CI, 0.56-0.98) and in scientists (adjusted OR, 0.63; 95% CI, 0.51-0.79) were less likely to receive non–evidence-based medication. Respondents reporting trust in social media (adjusted OR, 2.39; 95% CI, 2.00-2.87) and in Donald Trump (adjusted OR, 2.97; 95% CI, 2.34-3.78) were more likely to have taken non–evidence-based medication. Individuals with greater scores on the American Conspiracy Thinking Scale were more likely to have received non–evidence-based medications (unadjusted OR, 1.09; 95% CI, 1.06-1.11; adjusted OR, 1.10; 95% CI, 1.07-1.13).
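The adjusted odds ratios above come from multivariable logistic regression, which this note does not reproduce. As a quick illustration of the underlying arithmetic only, here is how an unadjusted odds ratio and its Wald 95% CI are computed from a 2×2 table; the counts below are synthetic, not the study's data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Assumes all four cell counts are positive."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se_log), exp(log(or_) + z * se_log)
```

For example, `odds_ratio_ci(10, 90, 5, 95)` returns an OR of about 2.11 with a CI spanning roughly 0.69 to 6.42; a CI that excludes 1, as in the study's reported associations, indicates a statistically significant association at the 5% level.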

    Conclusions and Relevance

    In this survey study of US adults, endorsement of misinformation about the COVID-19 pandemic, lack of trust in physicians or scientists, conspiracy-mindedness, and the nature of news sources were associated with receiving non–evidence-based treatment for COVID-19. These results suggest that the potential harms of misinformation may extend to the use of ineffective and potentially toxic treatments in addition to avoidance of health-promoting behaviors.

     