
Title: Who Uses Bots? A Statistical Analysis of Bot Usage in Moderation Teams
Adopting new technology is challenging for the volunteer moderation teams of online communities, and these challenges are aggravated as communities grow. In a prior qualitative study, Kiene et al. found evidence that moderation teams adapted to such challenges by drawing on their experience with other technological platforms to guide the creation and adoption of innovative custom moderation "bots." In this study, we test three hypotheses on the social correlates of user-innovated bot usage drawn from that qualitative study. We find strong evidence of the proposed relationship between community size and the use of user-innovated bots. Although previous work suggests that smaller teams of moderators will be more likely to use these bots, and that moderators with experience on a previous platform will be more likely to do so, we find little evidence in support of either proposition.
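As a rough illustration of the kind of analysis such hypothesis tests imply (a minimal sketch only; the dataset, column names, and model specification below are hypothetical stand-ins, not the paper's actual models or data), a logistic regression relating bot adoption to community size and team size might look like this in Python:

```python
# Hypothetical sketch: testing whether community size predicts adoption
# of user-innovated moderation bots. All names here are assumptions,
# not the paper's actual variables or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("communities.csv")  # hypothetical: one row per community
df["log_members"] = np.log1p(df["members"])  # community sizes are heavy-tailed

# uses_bot (0/1) ~ community size, moderation team size, and whether any
# moderator reports experience moderating on a previous platform.
model = smf.logit(
    "uses_bot ~ log_members + team_size + prior_platform_experience",
    data=df,
).fit()
print(model.summary())  # sign and significance of each coefficient
```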
Authors:
Award ID(s):
1617129
Publication Date:
2020
NSF-PAR ID:
10220251
Journal Name:
CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
Page Range or eLocation-ID:
1 to 8
Sponsoring Org:
National Science Foundation
More Like this
  1. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†
* These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in USA, Israel, and Australia.
Introduction
This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier to entry and therefore mitigates the risk of excluding interested parties from participating.
The challenge: enabling wide participation
With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier to entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e.
no participant should need to learn how to process fringe data formats or stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further • guide participants through the entire process from sign-up to model submission, • facilitate collaboration, and • provide instant feedback to participants through data visualisation and intermediate online leaderboards.
The platform
The architecture of the platform we designed and developed is shown in Figure 1. The system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on DSX and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook alone, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to, and run on, WML. WML in turn had access to shared storage, from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
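To make the starter-kit workflow concrete, here is a minimal sketch of what a participant's notebook cell might have contained. The PyTorch model is a toy, and run_on_wml is an assumed stand-in for the kit's unpublished utility functions; the real API is not shown in this abstract.

```python
# Hypothetical sketch of a starter-kit notebook cell. The model is a toy
# 1-D CNN over multi-channel EEG windows; run_on_wml (commented out) is
# an assumed placeholder for the kit's code-bundling utility functions.
import torch
import torch.nn as nn

class SeizureDetector(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            nn.Flatten(),
            nn.Linear(32, n_classes),  # seizure / no seizure
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

model = SeizureDetector()
scores = model(torch.randn(8, 19, 512))  # smoke test on fake EEG windows
print(scores.shape)  # torch.Size([8, 2])
# In the challenge, something like run_on_wml(model, train_fn) would then
# ship this code to the shared GPU cluster (hypothetical call, not the real API).
```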
Figure 1: High-level architecture of the challenge platform.
Measuring success
The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performances that allow a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. We therefore expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.
Figure 2: Primary occupation as reported by challenge participants.
Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined.
Conclusion
Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different expertise, with a focus on inclusiveness and a low barrier to entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills including sales and design, participated in this highly technical challenge.
  2. Online social communities are becoming windows for learning more about the health of populations, through information about our health-related behaviors and outcomes in daily life. At the same time, just as public health data and theory have shown that aspects of the built environment can affect our health-related behaviors and outcomes, it is also possible that online social environments (e.g., posts and other attributes of our online social networks) can shape facets of our lives. Given the important role of the online environment in public health research and its implications, the factors that contribute to the generation of such data must be well understood. Here we study the role of the built and online social environments in the expression of dining on Instagram in Abu Dhabi: a ubiquitous social media platform, a city with a vibrant dining culture, and a topic (food posts) that has been studied in relation to public health outcomes. Our study uses available data on user Instagram profiles and their Instagram networks, as well as the local food environment measured through the dining types (e.g., casual dining restaurants, food court restaurants, lounges, etc.) in each neighborhood. We find evidence that factors of the online social environment (profiles that post about dining versus profiles that do not) have different influences on the relationship between a user's built environment and the social dining expression, with effects also varying by the dining types in the environment and the time of day. We examine the mechanism of these relationships via moderation and mediation analyses. Overall, this study provides evidence that the interplay of online and built environments depends on attributes of those environments and can also vary by time of day. We discuss the implications of this synergy for precisely targeting public health interventions, as well as for using online data in public health research.
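For readers unfamiliar with the method, a moderation analysis of this kind is typically operationalized as an interaction term in a regression. A minimal sketch with hypothetical variable names follows; it is not the study's actual data or model.

```python
# Illustrative moderation analysis with made-up variable names: does the
# online social environment (share of a user's network posting about
# dining) moderate the effect of the local food environment on how often
# the user posts about dining? Not the study's actual data or model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("instagram_dining.csv")  # hypothetical dataset

# The '*' expands to both main effects plus their interaction; the
# interaction coefficient is the moderation effect of interest.
model = smf.ols(
    "dining_posts ~ casual_restaurants_nearby * network_share_dining_posts",
    data=df,
).fit()
print(model.summary())
```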
  3. A considerable portion of the US population still lacks access to technology, which makes it challenging for marginalized communities to access information and services. Research on the digital divide exists in various contexts, but few studies have examined it in the context of human services. This study examines the impact of socioeconomic status on the methods of communication used when searching for service-related information. We analyzed both quantitative and qualitative data collected from 63 low-income and/or current human service users in Albany, New York. Education showed positive associations with smartphone ownership and personal computer use. Income was significant only for tablet use. Non-whites were more likely than whites to use mobile apps rather than web browsers. Qualitative analysis revealed three key themes (i.e., availability, ease of use, and usefulness) as influencers of individual preferences among methods. Our findings suggest that the digital divide is not merely about income level but also about educational background and culture. Human service professionals need to consider multiple channels to reach targeted populations for service delivery. In particular, collaboration between service providers and public libraries is worth examining to ensure physical access and skills training for those who experience the digital divide at multiple levels.
  4. Makerspaces have observed and speculated benefits for the students who frequent them. For example, previous studies have found that students who are involved in their campus's makerspace tend to be more confident and less anxious when conducting engineering design tasks, while gaining hands-on experience with machinery not available in their coursework. Recognizing the potential benefits of academic makerspaces, we aimed to capture what influences students to become involved in these spaces through a mixed-methods study. A quantitative longitudinal study of students in a mechanical engineering program collected data on design self-efficacy, makerspace involvement, and user demographics through surveys of freshmen, sophomores, and seniors. In this paper, student responses from three semesters of freshman-level design classes are evaluated for involvement and self-efficacy based on whether or not a 3D modeling project required the use of makerspace equipment. The study finds that students required to use the makerspace for the project were significantly more likely to become involved in the makerspace. These results inspired us to integrate a qualitative approach to examine how student involvement and exposure to the space are related. Using an in-depth phenomenologically based interviewing method, purposive sampling, and snowball sampling, six women, all of whom had made the conscious decision to engage in a university makerspace, participated in a three-interview series. The interviews were transcribed and analyzed via emerging questions for categorical metrics and infographics of student exposure to and involvement in making and makerspaces. These findings are used to demonstrate 1) how students who do, or do not, seek out making activities may end up in the makerspace and 2) how student narratives resulting in high makerspace involvement are shaped by prior experiences, classes, and friendships.
  5. Only a limited number of studies have explored the effects of cumulative disaster exposure, defined here as multiple, acute-onset, large-scale collective events that cause disruption for individuals, families, and entire communities. The research that is available indicates that children and adults who experience these potentially traumatic community-level events are at greater risk of a variety of negative health outcomes and ongoing secondary stressors throughout their life course. The present study draws on in-depth interviews with a qualitative subsample of nine mother-child pairs who were identified as both statistical and theoretical outliers in terms of their levels of disaster exposure, through their participation in the larger, longitudinal Women and Their Children's Health (WaTCH) project conducted following the British Petroleum Deepwater Horizon oil spill. During Wave 2 of the WaTCH study, mothers and their children were asked survey questions about previous exposure to, and the impacts of, the oil spill, hurricanes, and other disasters. This article presents the qualitative interview data collected from the subsample of children and mothers who both endorsed having experienced three or more disasters that had a major impact on the child and the household. We refer to these children as exposure outliers. The in-depth narratives of the four mother-child pairs who told stories of multiple pre-disaster stressors emerging from structural inequalities and health and financial problems, protracted and unstable displacements, and high levels of material and social losses illustrate how problems can pile up to slow or completely hinder individual and family disaster recovery. These four mother-child pairs were especially likely to have experienced devastating losses in Hurricane Katrina in 2005, which then led to an accumulation of disadvantage and ongoing cycles of loss and disruption. The stories of the remaining five mother-child pairs underscore how pre-disaster resources, post-disaster support, and institutional stabilizing forces can accelerate recovery even after multiple disaster exposures. This study offers insights into how families can begin to prepare for a future that is likely to be increasingly punctuated by more frequent and intense extreme weather events and other types of disaster.