

Title: An Exploration of the Effects of Enterprise Social Media Use for Classroom Teams.
An Exploration of the Effects of Enterprise Social Media Use for Classroom Teams
Leticia Cherchiglia (Michigan State University, leticia@msu.edu), Wietske Van Osch (HEC Montreal & Michigan State University, wietske.van-osch@hec.ca), Yuyang Liang (Michigan State University, liangyuy@msu.edu), Elisavet Averkiadi (Michigan State University, averkiad@msu.edu)
ABSTRACT This paper explores the adoption of Microsoft Teams, a group-based Enterprise Social Media (ESM) tool, in the context of a hybrid Information Technology Management undergraduate course at a large midwestern university. With the primary goal of providing insights into the use and design of tools for group-based educational settings, we constructed a model to reflect our expectations that core ESM affordances would enhance students' perceptions of Microsoft Teams' functionality and efficiency, which in turn would increase both students' perceptions of group productivity and students' actual usage of Microsoft Teams for communication purposes. In our model we used three core ESM affordances from Treem and Leonardi (2013), namely editability (i.e., information can be created and/or edited after creation, usually in a collaborative fashion), persistence (i.e., information is stored permanently), and visibility (i.e., information is visible to other users). Analysis of quantitative (surveys, server-side data; N=62) and qualitative (interviews; N=7) data led to intriguing results. Although students considered the editability, persistence, and visibility affordances of Microsoft Teams to be convenient functions of this ESM, problems encountered when working collaboratively (such as connectivity, formatting, and search glitches) may have prevented them from perceiving this ESM as fast and user-friendly (i.e., efficient). Moreover, although perceived functionality and efficiency were positively connected to group productivity, hidden or non-intuitive communication features within this ESM might help explain the surprising negative connection between efficiency and usage of this ESM for group communication. Another explanation is that, given the plethora of competing tools specifically designed to afford seamless team communication, students preferred tools that were more familiar or perceived as more efficient for group communication than Microsoft Teams, a finding consistent with findings in organizational settings (Van Osch, Steinfield, and Balogh, 2015).
Beyond theoretical contributions related to the impact of ESM affordances on users' interaction perceptions, and of those perceptions on team and system outcomes, from a strategic and practical point of view our findings revealed several challenges for the use of Microsoft Teams (and perhaps ESM at large) in educational settings: 1) as the demand for online education grows, collaborative tools such as Microsoft Teams should strive to provide seamless experiences for multiple-user access to files and messages; 2) Microsoft Teams should improve its visual design in order to increase ease of use, user familiarity, and intuitiveness; 3) Microsoft Teams appears to have a high learning curve, partially related to the fact that some features are hidden or take extra steps/clicks to access, thus undermining their use; and 4) team communication is a complex topic that warrants further study because, given the choice, students will fall back on familiar tools, thereby undermining the full potential for team collaboration through the ESM. We expect that this paper can provide insights for educators faced with choosing an ESM tool best suited for group-based classroom settings, as well as for designers interested in adapting ESM to educational contexts, which is a promising avenue for market expansion.
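To make the hypothesized model concrete, the sketch below shows one way such a path model could be specified and estimated in Python with the semopy library (lavaan-style syntax). It is an illustration only, not the authors' analysis script: the variables are simplified observed composites, the data are synthetic placeholders sized to the reported N=62, and the paper's actual measurement model and estimator may differ.

import numpy as np
import pandas as pd
import semopy  # SEM library with lavaan-style model syntax

rng = np.random.default_rng(42)
n = 62  # sample size reported in the abstract

# Synthetic placeholder scores so the example runs end to end;
# real data would come from the surveys and server-side logs.
data = pd.DataFrame({
    "editability": rng.normal(size=n),
    "persistence": rng.normal(size=n),
    "visibility": rng.normal(size=n),
})
affordances = data[["editability", "persistence", "visibility"]].mean(axis=1)
data["functionality"] = affordances + rng.normal(scale=0.5, size=n)
data["efficiency"] = affordances + rng.normal(scale=0.5, size=n)
data["productivity"] = data[["functionality", "efficiency"]].mean(axis=1) + rng.normal(scale=0.5, size=n)
# Mimic the surprising negative efficiency-to-usage path reported above.
data["usage"] = data["functionality"] - data["efficiency"] + rng.normal(scale=0.5, size=n)

# Hypothesized paths: affordances -> perceptions -> productivity and usage.
model_desc = """
functionality ~ editability + persistence + visibility
efficiency ~ editability + persistence + visibility
productivity ~ functionality + efficiency
usage ~ functionality + efficiency
"""
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values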
Award ID(s):
1749018
NSF-PAR ID:
10299012
Author(s) / Creator(s):
Cherchiglia, Leticia; Van Osch, Wietske; Liang, Yuyang; Averkiadi, Elisavet
Date Published:
Journal Name:
SIGHCI 2020 Proceedings
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    This paper explores the adoption of a group-based Enterprise Social Media (ESM) tool (i.e., Microsoft Teams) in the context of a mid-sized undergraduate course in Information and Technology Management (ITM), thereby providing insights into the use and design of tools for group-based learning settings. The study used a mixed-methods approach consisting of interviews, surveys, and server-side (i.e., objective) data to investigate the effects of three core ESM affordances (i.e., editability, persistence, and visibility) on students' perceptions of ESM functionality and efficiency, and in turn, on ESM-enabled perceived team productivity as well as students' level of system usage. By leveraging a combination of qualitative and quantitative (both unobtrusive and self-reported) data, this paper aims to provide insights into the use of ESM in group-based classrooms, a theme of great importance given the need for high-quality online education experiences, especially during the current pandemic.
  2.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
    Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†
    * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in USA, Israel, and Australia.
    Introduction. This decade has seen an ever-growing number of scientific fields benefitting from the advances in machine learning technology and tooling. More recently, this trend reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further in the context of any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.
    The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (globally), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: 1) no participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist); 2) no participant should need to be an expert data scientist; 3) no participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently); and 4) no participant should need to provide their own computing resources. In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
    The platform. The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage, from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
    Figure 1: High-level architecture of the challenge platform.
    Measuring success. The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure-detection performance that allows a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.
    Figure 2: Primary occupation as reported by challenge participants.
    Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.
    Conclusion. Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills (including sales and design), participated in this highly technical challenge.
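    As a rough illustration of the starter-kit pattern described under "The platform" (a pre-processing hook, a model definition, and a training step that would run on WML), here is a minimal PyTorch sketch. The real kit's COS/WML utility functions are internal to IBM and are not reproduced; the EEG-like data, channel count, and window length below are invented placeholders.

    import torch
    import torch.nn as nn

    def preprocess(raw: torch.Tensor) -> torch.Tensor:
        # Stand-in for participants' custom pre-processing code,
        # e.g. filtering or per-channel normalisation of EEG windows.
        return (raw - raw.mean(dim=-1, keepdim=True)) / (raw.std(dim=-1, keepdim=True) + 1e-8)

    class SeizureDetector(nn.Module):
        # A deliberately small 1-D CNN over fixed-length EEG windows.
        def __init__(self, channels: int = 19):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(channels, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, 1),  # one logit: seizure vs. no seizure
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    # Synthetic batch standing in for data the kit would stream from storage.
    x = preprocess(torch.randn(8, 19, 256))  # (batch, channels, samples)
    y = torch.randint(0, 2, (8, 1)).float()

    model = SeizureDetector()
    loss = nn.BCEWithLogitsLoss()(model(x), y)
    loss.backward()  # in the challenge, training ran on WML's shared GPUs
    print(f"toy loss: {loss.item():.3f}")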
  3. In this work-in-progress paper, we continue our investigation into the propagation of the Concept Warehouse within mechanical engineering (Friedrichsen et al., 2017; Koretsky et al., 2019a). Even before the pandemic forced most instruction online, educational technology was a growing element in classroom culture (Koretsky & Magana, 2019b). However, adoption of technology tools for widespread use is often conceived from a turn-key lens, with professional development focused on procedural competencies and fidelity of implementation as the goal (Mills & Ragan, 2000; O'Donnell, 2008). Educators are given the tool with initial operating instructions, then left on their own to implement it in particular instructional contexts. There is little emphasis on the inevitable instructional decisions around incorporating the tool (Hodge, 2019) or on sustainable incorporation of technologies into existing instructional practice (Forkosh-Baruch et al., 2021). We consider the take-up of a technology tool as an emergent, rather than a prescribed, process (Henderson et al., 2011). In this WIP paper, we examine how two instructors, whom we call Al and Joe, reason through their adoption of a technology tool, focusing on interactions among instructors, tool, and students within and across contexts. The Concept Warehouse (CW) is a widely available, web-based, open educational technology tool used to facilitate concept-based active learning in different contexts (Friedrichsen et al., 2017; Koretsky et al., 2014). Development of the CW is ongoing and collaboration-driven: user-instructors from different institutions and disciplines can develop conceptual questions (called ConcepTests) and other learning and assessment tools that can be shared with other users. Currently there are around 3,500 ConcepTests, 1,500 faculty users, and 36,000 student users; about 700 ConcepTests have been developed for mechanics (statics and dynamics). The tool's spectrum of affordances allows different entry points for instructor engagement, but also allows instructors' use to grow and change as they become familiar with the tool and take up ideas from the contexts around them. As part of a larger study of propagation and use across five diverse institutions (Nolen & Koretsky, 2020), instructors were introduced to the tool, offered an introductory workshop and the opportunity to participate in a community of practice (CoP), and then interviewed early and later in their adoption. For this paper, we explore a bounded case study of the two instructors, Al and Joe, who took up the CW to teach Introductory Statics. Al and Joe were experienced instructors, committed to active learning, who presented examples from their ongoing adaptation of the tool for discussion in the community of practice. However, their decisions about how to integrate the tool fundamentally differed, including in the aspects of the tool they took up and the ways they made sense of their use. In analyzing these two cases, we begin to uncover how these instructors navigated the dynamic nature of pedagogical decision making in and across contexts.
  4.
    Teamwork is at the heart of most organizations today. Given increased pressures for organizations to be flexible and adaptable, teams are organizing in novel ways, using novel technologies to be increasingly agile. One such technology increasingly used by distributed teams is Enterprise Social Media (ESM): web-based applications utilized by organizations for enabling communication and collaboration between distributed employees. ESM feature unique affordances that facilitate collaboration, including interactions that are generative: group conversations that entail the creation of innovative concepts and resolutions. These types of interactions are an important attraction for companies deciding to implement ESM. ESM offer researchers in the field of HCI a unique opportunity to study such generative interactions, as all contributions to an ESM platform are made visible and are therefore available for analysis. Our goal in this preliminary study is to understand the nature of group generative interactions through their linguistic indicators. In this study, we utilize data from an ESM platform used by a multinational organization. Using a 1% subsample of all logged group interactions, we apply machine learning to classify text as generative or non-generative and extract the linguistic antecedents of the classified generative content. Our results show a promising method for investigating the linguistic indicators of generative content and provide a proof of concept for investigating group interactions in unobtrusive ways. Additionally, our method could provide an analytics tool for managers to measure the extent to which text-based tools, such as ESM, effectively nudge employees towards generative behaviors.
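    As an illustration of the classification step described above, the sketch below trains a TF-IDF plus logistic-regression pipeline on toy messages, then inspects the most predictive tokens as candidate linguistic indicators of generative content. The messages, labels, and model choice are invented placeholders; the study's actual feature set, classifier, and ESM data are not public.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy ESM messages with hypothetical labels: 1 = generative, 0 = not.
    messages = [
        "What if we combined the two designs into one prototype?",
        "Could we try a completely new onboarding flow instead?",
        "Meeting moved to 3pm, see the updated invite.",
        "Please upload the final report to the shared folder.",
    ]
    labels = [1, 1, 0, 0]

    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    pipeline.fit(messages, labels)

    # Tokens with the largest positive weights act as rough linguistic
    # indicators of generative content in this toy setup.
    vectorizer, classifier = pipeline.named_steps.values()
    top = np.argsort(classifier.coef_[0])[-5:]
    print([vectorizer.get_feature_names_out()[i] for i in top])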
  5. Jeff Nichols (Ed.)
    Instructors using algorithmic team formation tools must decide which criteria (e.g., skills, demographics, etc.) to use to group students into teams based on their teamwork goals, and have many possible sources from which to draw these configurations (e.g., the literature, other faculty, their students, etc.). However, tools offer considerable flexibility and selecting ineffective configurations can lead to teams that do not collaborate successfully. Due to such tools’ relative novelty, there is currently little knowledge of how instructors choose which of these sources to utilize, how they relate different criteria to their goals for the planned teamwork, or how they determine if their configuration or the generated teams are successful. To close this gap, we conducted a survey (N=77) and interview (N=21) study of instructors using CATME Team-Maker and other criteria-based processes to investigate instructors’ goals and decisions when using team formation tools. The results showed that instructors prioritized students learning to work with diverse teammates and performed “sanity checks” on their formation approach’s output to ensure that the generated teams would support this goal, especially focusing on criteria like gender and race. However, they sometimes struggled to relate their educational goals to specific settings in the tool. In general, they also did not solicit any input from students when configuring the tool, despite acknowledging that this information might be useful. By opening the “black box” of the algorithm to students, more learner-centered approaches to forming teams could therefore be a promising way to provide more support to instructors configuring algorithmic tools while at the same time supporting student agency and learning about teamwork. 
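    For readers unfamiliar with criterion-based team formation, the toy sketch below shows one simple heuristic: spreading a single categorical criterion evenly across teams by round-robin assignment. This is not CATME Team-Maker's actual algorithm, which weighs many instructor-configured criteria at once and can, for instance, avoid isolating underrepresented students, a nuance this sketch ignores.

    from collections import defaultdict

    def form_teams(students: dict[str, str], n_teams: int) -> list[list[str]]:
        """Greedy heuristic: students maps name -> criterion value (e.g., gender)."""
        # Group students by criterion value, then deal members round-robin
        # so each value is spread as evenly as possible across teams.
        buckets = defaultdict(list)
        for name, value in students.items():
            buckets[value].append(name)
        teams = [[] for _ in range(n_teams)]
        i = 0
        for members in buckets.values():
            for name in members:
                teams[i % n_teams].append(name)
                i += 1
        return teams

    roster = {"Ana": "F", "Ben": "M", "Cai": "M", "Dee": "F", "Eli": "M", "Fay": "F"}
    print(form_teams(roster, 2))  # two teams, each mixing F and M members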