Title: A Standard Decomposition Process to Inform the Development of Game-Based Learning Environments Focused on Computational Thinking
This study describes a standard decomposition process designed to break content standards down into observable components that may illustrate computational thinking skills. These components will be integrated into an online game-based learning environment as evidence of learning (EoL) and evidence of mastery (EoM). Focusing on three computer science standards, we describe how the standard decomposition process was used to generate standard decomposition tables, present sample content from these tables, and describe how the tables evolved based on educator feedback.
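To illustrate what a decomposed standard might look like as data, the following Python sketch shows one hypothetical row of a decomposition table; the standard identifier, component wording, and in-game observable are invented for illustration and are not content from the paper's tables.

    from dataclasses import dataclass

    @dataclass
    class DecompositionRow:
        standard: str         # content standard being decomposed (hypothetical ID)
        component: str        # observable component derived from the standard
        evidence: str         # "EoL" (evidence of learning) or "EoM" (evidence of mastery)
        game_observable: str  # in-game behavior that could surface the component

    row = DecompositionRow(
        standard="CS-Std-1 (hypothetical)",
        component="breaks a task into ordered sub-steps",
        evidence="EoL",
        game_observable="player sequences command blocks before running the level",
    )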
Award ID(s):
1933848
NSF-PAR ID:
10353043
Author(s) / Creator(s):
Date Published:
Journal Name:
Fifth APSCE International Conference on Computational Thinking and STEM Education 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Obeid, I.; Selesnick, I. (Eds.)
    The Neural Engineering Data Consortium at Temple University has been providing key data resources to support the development of deep learning technology for electroencephalography (EEG) applications [1-4] since 2012. We currently have over 1,700 subscribers to our resources and have been providing data, software and documentation from our web site [5] since 2012. In this poster, we introduce additions to our resources that have been developed within the past year to facilitate software development and big data machine learning research. Major resources released in 2019 include:
    ● Data: The most current release of our open source EEG data is v1.2.0 of TUH EEG, which adds 3,874 sessions and 1,960 patients from mid-2015 through 2016.
    ● Software: We have recently released a package, PyStream, that demonstrates how to correctly read an EDF file and access samples of the signal. This software demonstrates how to properly decode channels based on their labels and how to implement montages. Most existing open source packages for reading EDF files do not directly address the problem of channel labels [6].
    ● Documentation: We have released two documents that describe our file formats and data representations: (1) electrodes and channels [6], which describes how to map channel labels to the physical locations of the electrodes and includes a description of every channel label appearing in the corpus; and (2) annotation standards [7], which describes our annotation file format and how to decode the data structures used to represent the annotations.
    Additional significant updates to our resources include:
    ● NEDC TUH EEG Seizure (v1.6.0): This release expands the training dataset from 4,597 files to 4,702. Calibration sequences have been manually annotated and added to our existing documentation, and numerous corrections were made to existing annotations based on user feedback.
    ● IBM TUSZ Pre-Processed Data (v1.0.0): A preprocessed version of the TUH Seizure Detection Corpus using two methods [8], both of which use an FFT sliding-window approach (STFT). In the first method, FFT log magnitudes are used. In the second method, the FFT values are normalized across frequency buckets, correlation coefficients are calculated, and the eigenvalues and upper triangle of the resulting correlation matrix are used to generate features (see the sketch following this abstract).
    ● NEDC TUH EEG Artifact Corpus (v1.0.0): This corpus was developed to support modeling of non-seizure signals for problems such as seizure detection; we have been using the data to build better background models. Five artifact events have been labeled: (1) eye movements (EYEM), (2) chewing (CHEW), (3) shivering (SHIV), (4) electrode pop, electrostatic artifacts, and lead artifacts (ELPP), and (5) muscle artifacts (MUSC). The data is cross-referenced to TUH EEG v1.1.0 so that patient numbers, sessions, etc. can be matched.
    ● NEDC Eval EEG (v1.3.0): In this release of our standardized scoring software, the False Positive Rate (FPR) definition of the Time-Aligned Event Scoring (TAES) metric has been updated [9]. The standard definition is the number of false positives divided by the number of false positives plus the number of true negatives: #FP / (#FP + #TN).
    We also recently introduced the ability to download our data from an anonymous rsync server. The rsync command [10] synchronizes a remote directory with a local directory, copying the selected folder from the server to the desktop. It is available as part of most, if not all, Linux and Mac distributions (unfortunately, there is no acceptable port of this command for Windows). To use rsync to download content from our website, both a username and password are needed; an automated registration process on our website grants both. A typical rsync command to access our data is:
        rsync -auxv nedc_tuh_eeg@www.isip.piconepress.com:~/data/tuh_eeg/
    Rsync is a more robust option for downloading data than the alternatives we have experimented with, such as Google Drive and Dropbox, which are not suitable for such large amounts of data. All of the resources described in this poster are open source and freely available at https://www.isip.piconepress.com/projects/tuh_eeg/downloads/. We will demonstrate how to access and utilize these resources during the poster presentation and collect community feedback on the most needed additions to enable significant advances in machine learning performance.
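    As a concrete illustration of the second IBM TUSZ preprocessing method, the following Python sketch computes sliding-window FFT log magnitudes and derives features from the eigenvalues and upper triangle of the correlation matrix across frequency buckets. The window and hop lengths are illustrative assumptions; the authoritative parameters and details are those of [8], not this sketch.

        import numpy as np

        def stft_log_magnitude(signal, fs, win_sec=1.0, hop_sec=0.5):
            # Sliding-window FFT (STFT) log magnitudes for one EEG channel.
            # Window and hop lengths here are assumed values, not those of [8].
            win, hop = int(win_sec * fs), int(hop_sec * fs)
            frames = [np.log(np.abs(np.fft.rfft(signal[s:s + win])) + 1e-10)
                      for s in range(0, len(signal) - win + 1, hop)]
            return np.array(frames)  # shape: (n_frames, n_freq_buckets)

        def correlation_eigen_features(log_mags):
            # Second method: correlate frequency buckets across frames
            # (np.corrcoef normalizes each bucket internally), then keep the
            # eigenvalues and the upper triangle of the correlation matrix.
            corr = np.corrcoef(log_mags, rowvar=False)
            eigvals = np.linalg.eigvalsh(corr)
            upper = corr[np.triu_indices_from(corr, k=1)]
            return np.concatenate([eigvals, upper])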
  2. This paper reflects on the significance of ABET's "maverick evaluators" and what their existence says about the limits of accreditation as a mode of governance in US engineering education. The US system of engineering education operates as a highly complex system, where the diversity of the system is an asset to robust knowledge production and the production of a varied workforce. ABET Inc., the principal accreditation agency for engineering degree programs in the US, attempts to uphold a set of professional standards for engineering education using a voluntary, peer-based system of evaluation. Key to this approach is a volunteer army of trained program evaluators (PEVs), assigned by the engineering professional societies, who serve as the frontline workers responsible for auditing the content, learning outcomes, and continuous improvement processes utilized by every engineering degree program accredited by ABET. We look specifically at those who become labeled "maverick evaluators" in order to better understand how this system functions and to understand its limitations as a form of governance in maintaining educational quality and appropriate professional standards within engineering education. ABET was established in 1932 as the Engineers' Council for Professional Development (ECPD). The Cold War consensus around the engineering sciences led to a more quantitative system of accreditation, first implemented in 1956. However, the decline of the Cold War and rising concerns about national competitiveness prompted ABET to shift to a more neoliberal model of accountability built around outcomes assessment and modeled after total quality management / continuous process improvement (TQM/CPI) processes that nominally gave PEVs greater discretion in evaluating engineering degree programs. However, conflicts over how the PEVs exercised judgment point to conservative aspects in the structure of the ABET organization and within the engineering profession at large. This paper, and the phenomena we describe here, is one part of a broader, interview-based study of higher education governance and engineering educational reform within the United States. We have conducted over 300 interviews at more than 40 different academic institutions and professional organizations, where ABET and institutional responses to the reforms associated with "EC 2000," which brought outcomes assessment to engineering education, are extensively discussed. The phenomenon of so-called "maverick evaluators" reveals the divergent professional interests that remain embedded within ABET and the engineering profession at large. Those associated with Civil and Environmental Engineering, and to a lesser extent Mechanical Engineering, continue to push for higher standards of accreditation grounded in a stronger vision for their professions. While the phenomenon is complex and more subtle than we can summarize in an abstract, "maverick evaluators" emerged as a label for PEVs who interpreted their role, including determinations about whether certain content was "appropriate to the field of study," using professional standards that lie outside the consensus position held by the majority of the members of the Engineering Accreditation Commission. This, conjoined with engineers' epistemic aversion to uncertainty and concerns about the legal liability of their decisions, resulted in a narrower interpretation of key accreditation criteria.
The organization then designed and used a "due-process" review procedure to discipline identified shortcomings and limit divergent interpretations. The net result is that the bureaucratic process ABET built to obtain uniformity in accreditation outcomes simultaneously blunts the organization's capacity to support varied interpretations of professional standards at the program level. The apparatus has also contributed to ABET's reputation as an organization focused on minimum standards, rather than one that functions as an effective driver of further change in engineering education.
  3. The Next Generation Science Standards [1] recognize evidence-based argumentation as one of the essential skills for students to develop throughout their science and engineering education. Argumentation focuses students on the need for quality evidence, which helps to develop their deep understanding of content [2]. Argumentation has been studied extensively in mathematics and science education, and to some extent in engineering education (see, for example, [3], [4], [5], [6]). After a thorough search of the literature, we found few studies that have considered how teachers support collective argumentation during engineering learning activities. The purpose of this program of research was to support teachers in viewing argumentation as an important way to promote critical thinking and to provide teachers with tools to implement argumentation in their lessons integrating coding into science, technology, engineering, and mathematics (which we refer to as integrative STEM). We applied a framework developed for secondary mathematics [7] to understand how teachers support collective argumentation in integrative STEM lessons. This framework uses Toulmin's [8] conceptualization of argumentation, which includes three core components of arguments: a claim (or hypothesis) that is based on data (or evidence), accompanied by a warrant (or reasoning) that relates the data to the claim [8], [9]. To adapt the framework, video data were coded using previously established methods for analyzing argumentation [7]. In this paper, we consider how the framework can be applied to an elementary school teacher's classroom interactions and present examples of how the teacher implements various questioning strategies to facilitate more productive argumentation and deeper student engagement. We aim to understand the nature of the teacher's support for argumentation: contributions and actions from the teacher that prompt or respond to parts of arguments. In particular, we look at examples of how the teacher supports students in moving beyond unstructured tinkering (e.g., trial and error) to thinking logically about coding and developing reasoning for the choices they make in programming. We also look at the components of arguments that students provide, with and without teacher support. Through the use of the framework, we are able to articulate important aspects of collective argumentation that would otherwise remain in the background. The framework gives us both eyes to see and language to describe how teachers support collective argumentation in integrative STEM classrooms.
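    As a rough illustration of how the three Toulmin components could be represented when coding classroom episodes, here is a minimal Python sketch; the field names are a gloss on the framework, and the sample episode is invented, not drawn from the study's video data.

        from dataclasses import dataclass, field

        @dataclass
        class Argument:
            claim: str                    # the assertion or hypothesis being advanced
            data: list[str]               # evidence offered in support of the claim
            warrant: str                  # reasoning that relates the data to the claim
            teacher_supports: list[str] = field(default_factory=list)  # teacher prompts or responses

        episode = Argument(
            claim="the robot turns too far because the loop repeats one extra time",
            data=["the robot overshoots the corner", "the loop count is 5 but the path needs 4"],
            warrant="each repeat adds one turn, so an extra repeat adds an extra turn",
            teacher_supports=["What did you notice when it ran?", "Why do you think that happened?"],
        )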
  4.
    This workshop will focus on how to teach data collection and analysis to preschoolers. Our project aims to promote preschoolers' engagement with, and learning of, mathematics and computational thinking (CT) through a set of classroom activities that engage preschoolers in a data collection and analysis (DCA) process. To do this, the project team is engaging in an iterative cycle of development and testing of hands-on, play-based curricular investigations with feedback from teachers. A key component of the intervention is a teacher-facing digital app (for teachers to use with students on touch-screen tablets) that supports preschool teachers and children in collaboratively collecting data, creating simple graphs, and using the graphs to answer real-world questions. The curricular investigations offer an applied context for using mathematical knowledge (i.e., counting, sorting, classifying, comparing, contrasting) to engage with real-world investigations and lay the foundation for developing flexible problem-solving skills. Each investigation follows a series of instructional tasks that scaffold the problem-solving process and includes (a) nine hands-on, play-based problem-solving investigations in which children answer real-world questions by collecting data, creating simple graphs, and interpreting the graphs, and (b) a teacher-facing digital app that supports specific data collection and organization steps (i.e., collecting, recording, visualizing). This workshop will: (1) describe the rationale and prior research conducted in this domain; (2) describe an intervention in development, focused on data collection and analysis content for preschoolers, that develops mathematical (Common Core standards) and computational thinking skills (K-12 Computational Thinking Framework standards); (3) demonstrate an app in development that guides teachers and preschoolers through the investigation process and generates graphs to answer questions (NGSS practice standards); (4) report on feedback from a pilot study conducted virtually in preschool classrooms; and (5) describe developmentally appropriate practices for engaging young children in investigations, data collection, and data analysis.
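    To make the tally-and-graph step of the DCA process concrete, here is a minimal Python sketch of the kind of counting and simple graphing the app scaffolds; the survey question and responses are invented for illustration, and the actual intervention is a tablet app, not this script.

        from collections import Counter

        # Hypothetical class survey: each response is one child's favorite fruit.
        responses = ["apple", "banana", "apple", "grape", "banana", "apple"]
        counts = Counter(responses)

        # Print a simple text bar graph that supports comparing and contrasting.
        for fruit, n in sorted(counts.items()):
            print(f"{fruit:<8}" + "#" * n + f"  ({n})")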
  5. Abstract

    Camera trapping has revolutionized wildlife ecology and conservation by providing automated data acquisition, leading to the accumulation of massive amounts of camera trap data worldwide. Although management and processing of camera trap-derived Big Data are becoming increasingly solvable with the help of scalable cyber-infrastructures, harmonization and exchange of the data remain limited, hindering their full potential. There is currently no widely accepted standard for exchanging camera trap data. The only existing proposal, the "Camera Trap Metadata Standard" (CTMS), has several technical shortcomings and limited adoption. We present a new data exchange format, the Camera Trap Data Package (Camtrap DP), designed to allow users to easily exchange, harmonize and archive camera trap data at local to global scales. Camtrap DP structures camera trap data in a simple yet flexible data model consisting of three tables (Deployments, Media and Observations) that supports a wide range of camera deployment designs, classification techniques (e.g., human and AI, media-based and event-based) and analytical use cases, from compiling species occurrence data through distribution, occupancy and activity modeling to density estimation. The format further achieves interoperability by building upon existing standards, the Frictionless Data Package in particular, which is supported by a suite of open software tools to read and validate data. Camtrap DP is the consensus of a long, in-depth consultation and outreach process with standards and software developers, the main existing camera trap data management platforms, major players in the field of camera trapping and the Global Biodiversity Information Facility (GBIF). Under the umbrella of Biodiversity Information Standards (TDWG), Camtrap DP has been developed openly, collaboratively and with version control from the start. We encourage camera trapping users and developers to join the discussion and contribute to the further development and adoption of this standard.
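    Because Camtrap DP builds on the Frictionless Data Package standard, a dataset can be read and validated with the open frictionless-py tooling. The sketch below assumes a local descriptor named datapackage.json; the resource name follows the Observations table described above, and the field names printed are assumptions that should be checked against the actual Camtrap DP schema.

        # pip install frictionless
        from frictionless import Package, validate

        report = validate("datapackage.json")   # validate all tables against their schemas
        print("valid:", report.valid)

        package = Package("datapackage.json")
        observations = package.get_resource("observations")
        for row in observations.read_rows():
            print(row["scientificName"], row["eventStart"])  # field names assumed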