The ultimate goal of using learning analytics dashboards is to improve teaching and learning processes. Instructors who use an analytics dashboard are presented with data about their students and/or their teaching practices. Despite growing research on analytics dashboards, little is known about how instructors make sense of and reflect on the data they receive. Moreover, there is limited evidence on how instructors who use these dashboards take further action and improve their pedagogical practices. My dissertation addresses these issues by examining instructors’ sense-making, reflective practice, and subsequent actions taken from classroom analytics in three phases: (a) problem analysis through a systematic literature review, (b) implementation and examination of instructors’ sense-making and reflective practice, and (c) human-centered approaches to co-designing instructor dashboards with stakeholders. The findings will contribute to the conceptual basis of how instructors change their pedagogical practices and to the practical application of human-centered principles in designing effective dashboards.
Towards strengthening links between learning analytics and assessment: Challenges and potentials of a promising new bond
Learning analytics uses large amounts of data about learner interactions in digital learning environments to understand and enhance learning. Although measurement is a central dimension of learning analytics, there has thus far been little research that examines links between learning analytics and assessment. This special issue of Computers in Human Behavior highlights 11 studies that explore how links between learning analytics and assessment can be strengthened. The contributions of these studies can be broadly grouped into three categories: analytics for assessment (learning analytic approaches as forms of assessment); analytics of assessment (applications of learning analytics to answer questions about assessment practices); and validity of measurement (conceptualization of and practical approaches to assuring validity in measurement in learning analytics). The findings of these studies highlight pressing scientific and practical challenges at the intersection of learning analytics and assessment: task design, analysis of learning progressions, trustworthiness, and fairness. Addressing these challenges will require interdisciplinary teams and is essential to unlocking the full potential of this promising new bond.
- Award ID(s):
- 2100320
- PAR ID:
- 10341756
- Date Published:
- Journal Name:
- Computers in Human Behavior
- Volume:
- 134
- ISSN:
- 0747-5632
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Learning analytics, referring to the measurement, collection, analysis, and reporting of data about learners and their contexts in order to optimize learning and the environments in which it occurs, is proving to be a powerful approach for understanding and improving science learning. However, few studies have focused on leveraging learning analytics to assess hands-on laboratory skills in K-12 science classrooms. This study demonstrated the feasibility of gauging laboratory skills from students’ process data logged by a mobile augmented reality (AR) application for conducting science experiments. Students used the mobile AR technology to investigate a variety of science phenomena involving concepts central to understanding physics. Seventy-two students from a suburban middle school in the Northeastern United States participated in this study, conducting the experiments in pairs. Mining the process data with Bayesian networks showed that most participating students demonstrated some degree of proficiency in laboratory skills. The findings also indicated a positive correlation between laboratory skills and conceptual learning. These results suggest that learning analytics offers a possible way to measure hands-on laboratory learning in real time and at scale.
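To make the inference step concrete, here is a minimal sketch of how a Bayesian network with a single latent skill node and conditionally independent logged events could turn process data into a posterior estimate of proficiency. The two-state skill, the event types, and all probabilities below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative two-state model: a latent "skill" node with a uniform prior
# and two logged process events, conditionally independent given skill.
prior = np.array([0.5, 0.5])  # P(skill = [not proficient, proficient])

# P(event performed correctly | skill state), one row per logged event type
likelihood_correct = np.array([
    [0.2, 0.8],  # apparatus set up correctly
    [0.3, 0.9],  # measurement recorded correctly
])

def posterior_skill(outcomes):
    """Bayes update over the latent skill given 0/1 outcomes per event."""
    post = prior.copy()
    for event, correct in enumerate(outcomes):
        p = likelihood_correct[event] if correct else 1 - likelihood_correct[event]
        post = post * p
    return post / post.sum()

print(posterior_skill([1, 1]))  # both events correct -> [0.077, 0.923]
```

Because the posterior can be updated after every logged event, this kind of model is what enables the real-time, at-scale measurement the abstract describes.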
-
Accurately representing changes in mental states over time is crucial for understanding their complex dynamics. However, there is little methodological research on the validity and reliability of human-produced continuous-time annotation of these states. We present a psychometric perspective on valid and reliable construct assessment, examine the robustness of interval-scale (e.g., values between zero and one) continuous-time annotation, and identify three major threats to validity and reliability in current approaches. We then propose a novel ground-truth generation pipeline that combines emerging techniques for improving validity and robustness. We demonstrate its effectiveness in a case study involving crowd-sourced annotation of perceived violence in movies, where our pipeline achieves a .95 Spearman correlation in summarized ratings compared to a .15 baseline. These results suggest that highly accurate ground-truth signals can be produced from continuous annotations by using additional comparative annotation (e.g., a versus b) to correct structured errors, highlighting the need for a paradigm shift toward robust construct measurement over time.
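As a rough illustration of why ordinal "a versus b" judgments can rescue rank information that noisy continuous ratings lose, the simulation below compares the Spearman correlation of raw ratings and of a simple pairwise win-rate aggregate against a simulated ground truth. The noise levels and the win-rate aggregation are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 30
truth = rng.uniform(0, 1, n)  # simulated true construct intensity

# Continuous ratings corrupted by heavy structured error (e.g., drift)
raw = truth + rng.normal(0, 0.4, n)

# Comparative annotations: pairwise judgments are usually far more
# reliable; aggregate them into a per-stimulus win rate.
wins, trials = np.zeros(n), np.zeros(n)
for _ in range(2000):
    a, b = rng.choice(n, 2, replace=False)
    winner = a if truth[a] + rng.normal(0, 0.05) > truth[b] + rng.normal(0, 0.05) else b
    wins[winner] += 1
    trials[a] += 1
    trials[b] += 1
win_rate = wins / np.maximum(trials, 1)  # rank-order estimate of intensity

rho_raw, _ = spearmanr(raw, truth)
rho_fix, _ = spearmanr(win_rate, truth)
print(f"raw: {rho_raw:.2f}  pairwise-corrected: {rho_fix:.2f}")
```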
-
Paaßen, Benjamin; Demmans Epp, Carrie (Eds.) Although the fields of educational data mining and learning analytics have grown in analytic sophistication and breadth of application, their impact on theory-building has been limited. To move these fields forward, studies should not only be driven by learning theory but should also use analytics to inform theory. In this paper, we present an approach for integrating educational data mining models with design-based research to promote theory-building informed by data-based models. This approach aligns theory, the design of the learning environment, data collection, and analytic methods through iterations that focus on refining and improving all of these components. We provide an example from our own work, driven by a critical constructionist learning framework: the design and development of a digital learning environment for elementary-school-aged children to learn about artificial intelligence within sociopolitical contexts, using epistemic network analysis as a tool for modeling learning. We conclude with how this approach can be reciprocally beneficial: educational data miners can use their models to inform theory, and learning scientists can augment their theory-building practices through big-data models.
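For readers unfamiliar with epistemic network analysis, the sketch below shows only its accumulation step: counting which discourse codes co-occur within a moving stanza window. The codes, utterances, and window size are invented for illustration; a full ENA workflow would further normalize these counts and project them into a low-dimensional space.

```python
import itertools
import numpy as np

# Hypothetical coded transcript: each utterance is tagged with codes
codes = ["ethics", "data", "model", "fairness"]
utterances = [
    {"ethics", "data"},
    {"model"},
    {"data", "model"},
    {"fairness", "ethics"},
    {"model", "fairness"},
]

# ENA-style accumulation: within a moving stanza window, count which
# pairs of codes co-occur, yielding a symmetric connection matrix.
window = 2
idx = {c: i for i, c in enumerate(codes)}
conn = np.zeros((len(codes), len(codes)))
for t in range(len(utterances)):
    stanza = set().union(*utterances[max(0, t - window + 1): t + 1])
    for a, b in itertools.combinations(sorted(stanza), 2):
        conn[idx[a], idx[b]] += 1
        conn[idx[b], idx[a]] += 1

print(conn)  # connection strengths between codes
```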
-
Problem-solving is a typical type of assessment in engineering dynamics tests. To solve a problem, students need to set up equations and find a numerical answer. Depending on its difficulty and complexity, a quantitative problem can take anywhere from ten to thirty minutes to solve. Due to the time constraints of in-class testing, a typical test may contain only a limited number of problems, covering an insufficient range of problem types. This can reduce validity and reliability, two crucial qualities of any assessment. A test with high validity should cover appropriate content and be able to distinguish high-performing students from low-performing students and every student in between. A reliable test should have enough items to provide consistent information about students’ mastery of the material. In this work-in-progress study, we will investigate to what extent a newly developed assessment is valid and reliable. Symbolic problem solving in this study refers to solving problems by setting up a system of equations without finding numeric solutions. Such problems usually take much less time, so a single test can include more problems covering a wider variety of types. We will follow the Standards for Educational and Psychological Testing (the Standards), developed jointly by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME), to evaluate the content validity and internal consistency of a collection of symbolic problems. Examples on rectilinear kinematics and angular motion will be provided to illustrate how symbolic problem solving is used in both homework and assessments, as in the sketch below. Numerous studies have shown that symbolic questions impose greater challenges because of students’ algebraic difficulties; thus, we will share strategies for preparing students to approach such problems.
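As a hypothetical example of the item format, the SymPy snippet below works a constant-acceleration rectilinear kinematics problem entirely symbolically, eliminating time to express the final speed in terms of the givens. The equations are standard kinematics, not actual items from the assessment.

```python
import sympy as sp

# A rectilinear-kinematics item in symbolic form: constant acceleration a,
# initial speed v0, displacement s; report the final speed v symbolically.
v0, a, s, t = sp.symbols("v0 a s t", positive=True)

eq_disp = sp.Eq(s, v0 * t + sp.Rational(1, 2) * a * t**2)  # s = v0*t + a*t**2/2

# Eliminate time: solve the displacement equation for t, then substitute
# into v = v0 + a*t to obtain the symbolic answer a student would report.
t_sols = sp.solve(eq_disp, t)  # positivity assumptions drop the negative root
v_exprs = [sp.simplify(v0 + a * ts) for ts in t_sols]
print(v_exprs)  # [sqrt(2*a*s + v0**2)]
```

Grading such answers reduces to checking symbolic equivalence (e.g., whether sp.simplify(student_expr - v_exprs[0]) is zero), which is part of what makes the format quick to administer and score.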