Title: The Dynamics Concept Inventory (DCI) – The Past, Present, and Future
The Dynamics Concept Inventory (DCI) was developed over 15 years ago as a tool for
instructors teaching Dynamics to assess their students’ gains in conceptual understanding of
the material. Since its initial release, there have been hundreds of downloads of the
instrument, and the initial papers presenting the instrument have been referenced over 100
times. In this paper, we will 1) present a brief history of the development of the DCI, 2)
evaluate the ways it has been used since its release with the hope of encouraging more
engineering faculty members to use it, 3) summarize results from those who have used it, and 4) present plans for future development and distribution.
Dare, E. A.; Hiwatig, B.; Keratithamkul, K.; Ellis, J. A.; Roehrig, G. H.; Ring-Whalen, E. A.; Rouleau, M. D.; Faruqi, F.; Rice, C.; Titu, P.; et al. (ASEE Annual Conference proceedings)
Integrated approaches to teaching science, technology, engineering, and mathematics (commonly referred to as STEM education) in K-12 classrooms have resulted in a growing number of teachers incorporating engineering in their science classrooms. Such changes are a result of shifts in science standards to include engineering as evidenced by the Next Generation Science Standards. To date, 20 states and the District of Columbia have adopted the NGSS and another 24 have adopted standards based on the Framework for K-12 Science Education. Despite the increased presence of engineering and integrated STEM education in K-12 education, there are several concerns to consider. One concern is the limited availability of observation instruments appropriate for instruction where multiple STEM disciplines are present and integrated with one another. Addressing this concern requires the development of a new observation instrument, designed with integrated STEM instruction in mind. An instrument such as this has implications for both research and practice. For example, research using this instrument could help educators compare integrated STEM instruction across grade bands. Additionally, this tool could be useful in the preparation of pre-service teachers and professional development of in-service teachers new to integrated STEM education and formative learning through professional learning communities or classroom coaching.
The work presented here describes in detail the development of an integrated STEM observation instrument - the STEM Observation Protocol (STEM-OP) - that can be used for both research and practice. Over a period of approximately 18 months, a team of STEM educators and educational researchers developed a 10-item integrated STEM observation instrument for use in K-12 science and engineering classrooms. The process of developing the STEM-OP began with establishing a conceptual framework, drawing on the integrated STEM research literature, national standards documents, and frameworks for both K-12 engineering education and integrated STEM education.
As part of the instrument development process, the project team had access to over 2,000 classroom videos of integrated STEM education. Initial analysis of a selection of these videos helped the project team write a preliminary draft instrument consisting of 79 items. Through several rounds of revision, which included constructing detailed scoring levels for the items, collapsing items that significantly overlapped, and piloting the instrument for usability, items were added, edited, or removed for various reasons. These reasons included the intricacy of the observed phenomenon or an item not being specific to integrated STEM education (e.g., questioning). In its final form, the STEM-OP consists of 10 items, each comprising four descriptive levels. Each item is also accompanied by a set of user guidelines, which have been refined by the project team as a result of piloting the instrument and reviewed by external experts in the field. The instrument has been shown to be reliable within the project team, and further validation is underway. The STEM-OP will be of use to a wide variety of educators and educational researchers looking to understand the implementation of integrated STEM education in K-12 science and engineering classrooms.
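Because the STEM-OP's structure is fixed (10 items, each scored on one of four descriptive levels), scores from a single observation are easy to represent in code. The following is a minimal sketch for illustration only; the item names and the 0-3 numeric coding are hypothetical placeholders, not the actual STEM-OP items or scoring conventions.

from dataclasses import dataclass, field

# Sketch of recording STEM-OP scores for one observed lesson.
# Item names below are hypothetical; the real instrument defines
# 10 items, each scored on one of four descriptive levels.

NUM_ITEMS = 10
NUM_LEVELS = 4  # four descriptive levels per item (coded 0-3 here)

@dataclass
class StemOpObservation:
    observer: str
    lesson_id: str
    scores: dict[str, int] = field(default_factory=dict)

    def add_score(self, item: str, level: int) -> None:
        if not 0 <= level < NUM_LEVELS:
            raise ValueError(f"level must be 0..{NUM_LEVELS - 1}, got {level}")
        self.scores[item] = level

    def is_complete(self) -> bool:
        return len(self.scores) == NUM_ITEMS

# Usage: record scores for two (hypothetical) items.
obs = StemOpObservation(observer="rater_01", lesson_id="lesson_042")
obs.add_score("integration_of_stem_disciplines", 2)  # hypothetical item name
obs.add_score("engineering_design_practices", 3)     # hypothetical item name
print(obs.is_complete())  # False until all 10 items are scored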
Buckwalter, Grace; Chhin (Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB)). Obeid, Iyad; Selesnick (Ed.)
The Temple University Hospital EEG Corpus (TUEG) [1] is the largest publicly available EEG corpus of its type and currently has over 5,000 subscribers (we currently average 35 new subscribers a week). Several valuable subsets of this corpus have been developed including the Temple University Hospital EEG Seizure Corpus (TUSZ) [2] and the Temple University Hospital EEG Artifact Corpus (TUAR) [3]. TUSZ contains manually annotated seizure events and has been widely used to develop seizure detection and prediction technology [4]. TUAR contains manually annotated artifacts and has been used to improve machine learning performance on seizure detection tasks [5]. In this poster, we will discuss recent improvements made to both corpora that are creating opportunities to improve machine learning performance.
Two major concerns that were raised when v1.5.2 of TUSZ was released for the Neureka 2020 Epilepsy Challenge were: (1) the subjects contained in the training, development (validation) and blind evaluation sets were not mutually exclusive, and (2) high frequency seizures were not accurately annotated in all files. Regarding (1), there were 50 subjects in dev, 50 subjects in eval, and 592 subjects in train. There was one subject common to dev and eval, five subjects common to dev and train, and 13 subjects common between eval and train. Though this does not substantially influence performance for the current generation of technology, it could be a problem down the line as technology improves. Therefore, we have rebuilt the partitions of the data so that this overlap was removed. This required augmenting the evaluation and development data sets with new subjects that had not been previously annotated so that the size of these subsets remained approximately the same. Since these annotations were done by a new group of annotators, special care was taken to make sure the new annotators followed the same practices as the previous generations of annotators. Part of our quality control process was to have the new annotators review all previous annotations. This rigorous training coupled with a strict quality control process where annotators review a significant amount of each other’s work ensured that there is high interrater agreement between the two groups (kappa statistic greater than 0.8) [6].
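The subject-overlap audit described above reduces to a set-intersection check over subject IDs. The following is a minimal sketch; the extract_subject() helper and the example file paths are hypothetical stand-ins for the actual TUSZ directory layout.

# Sketch: verify that train/dev/eval partitions are subject-disjoint,
# as was done when rebuilding the TUSZ v1.5.3 partitions. In TUSZ the
# subject ID is encoded in the directory structure of each EDF path;
# the extraction rule below is a hypothetical example of that idea.

def extract_subject(path: str) -> str:
    # Hypothetical: assume the subject ID is the third-from-last
    # path component (adjust to the actual corpus layout).
    return path.split("/")[-3]

def check_disjoint(partitions: dict[str, list[str]]) -> None:
    subjects = {name: {extract_subject(p) for p in paths}
                for name, paths in partitions.items()}
    names = list(subjects)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            overlap = subjects[a] & subjects[b]
            if overlap:
                print(f"{a} and {b} share {len(overlap)} subject(s): {sorted(overlap)}")
            else:
                print(f"{a} and {b} are subject-disjoint")

# Usage with illustrative paths:
check_disjoint({
    "train": ["train/01_tcp_ar/012/00001234/s001_2012/00001234_s001.edf"],
    "dev":   ["dev/01_tcp_ar/034/00005678/s002_2013/00005678_s002.edf"],
    "eval":  ["eval/01_tcp_ar/056/00009012/s001_2014/00009012_s001.edf"],
})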
In the process of reviewing this data, we also decided to split long files into a series of smaller segments to facilitate processing of the data. Some subscribers found it difficult to process long files using Python code, which tends to be very memory intensive. We also found it inefficient to manipulate these long files in our annotation tool. In this release, the maximum duration of any single file is limited to 60 mins. This increased the number of edf files in the dev set from 1012 to 1832.
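A minimal sketch of this kind of segmentation, operating on an already-loaded signal array with plain NumPy (the 250 Hz sampling rate is illustrative, and reading/writing the EDF files themselves is left to an EEG I/O library):

import numpy as np

# Sketch: split a long multichannel EEG recording into segments of at
# most 60 minutes, as done for the long files in this release.

FS = 250                         # samples per second (illustrative)
MAX_MINUTES = 60
MAX_SAMPLES = FS * 60 * MAX_MINUTES

def segment(signal: np.ndarray) -> list[np.ndarray]:
    """Split a (channels, samples) array into chunks of <= 60 minutes."""
    n_samples = signal.shape[1]
    return [signal[:, start:start + MAX_SAMPLES]
            for start in range(0, n_samples, MAX_SAMPLES)]

# Usage: a synthetic 2.5-hour, 2-channel recording yields 3 segments.
eeg = np.zeros((2, FS * 60 * 150), dtype=np.float32)
chunks = segment(eeg)
print([c.shape[1] / (FS * 60) for c in chunks])  # [60.0, 60.0, 30.0]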
Regarding (2), as part of discussions of several issues raised by a few subscribers, we discovered some files only had low frequency epileptiform events annotated (defined as events that ranged in frequency from 2.5 Hz to 3 Hz), while others had events annotated that contained significant frequency content above 3 Hz. Though there were not many files that had this type of activity, it was enough of a concern to necessitate reviewing the entire corpus. An example of an epileptiform seizure event with frequency content higher than 3 Hz is shown in Figure 1. Annotating these additional events slightly increased the number of seizure events. In v1.5.2, there were 673 seizures, while in v1.5.3 there are 1239 events.
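One simple way to flag candidate events whose frequency content lies above 3 Hz is a spectral peak check. The sketch below uses SciPy's Welch estimator; it is an illustrative screening heuristic under assumed parameters, not the review procedure the annotation team used.

import numpy as np
from scipy.signal import welch

# Sketch: flag an event whose dominant spectral peak lies above 3 Hz,
# the boundary discussed in the re-annotation effort above.

FS = 250  # illustrative sampling rate

def dominant_frequency(event: np.ndarray, fs: int = FS) -> float:
    f, pxx = welch(event, fs=fs, nperseg=min(len(event), fs * 2))
    return float(f[np.argmax(pxx)])

# Usage: a synthetic 5 Hz burst is flagged as above-threshold.
t = np.arange(0, 10, 1 / FS)
event = np.sin(2 * np.pi * 5.0 * t)
peak = dominant_frequency(event)
print(peak, peak > 3.0)  # ~5.0 True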
One of the fertile areas for technology improvements is artifact reduction. Artifacts and slowing constitute the two major error modalities in seizure detection [3]. This was a major reason we developed TUAR. It can be used to evaluate artifact detection and suppression technology as well as multimodal background models that explicitly model artifacts. An issue with TUAR was the practicality of the annotation tags used when there are multiple simultaneous events. An example of such an event is shown in Figure 2. In this section of the file, there is an overlap of eye movement, electrode artifact, and muscle artifact events. We previously annotated such events using a convention that included annotating background along with any artifact that is present. The artifacts present would either be annotated with a single tag (e.g., MUSC) or a coupled artifact tag (e.g., MUSC+ELEC). When multiple channels have background, the tags become crowded and difficult to identify. This is one reason we now support a hierarchical annotation format using XML – annotations can be arbitrarily complex and support overlaps in time.
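A minimal sketch of what such a hierarchical XML annotation could look like, built with Python's standard library; the tag and attribute names here are hypothetical, not the actual TUAR schema.

import xml.etree.ElementTree as ET

# Sketch: overlapping artifact events (eye movement, electrode, and
# muscle artifacts) represented in XML, where events may overlap in
# time. Tag and attribute names are hypothetical.

root = ET.Element("annotations", {"file": "example_s001_t000.edf"})
for label, start, stop, channel in [
    ("EYEM", 12.0, 14.5, "FP1-F7"),   # eye movement
    ("ELEC", 12.8, 15.2, "T3-T5"),    # electrode artifact, overlaps EYEM
    ("MUSC", 13.1, 16.0, "F7-T3"),    # muscle artifact, overlaps both
]:
    ET.SubElement(root, "event", {
        "label": label,
        "start": str(start),   # seconds from file start
        "stop": str(stop),
        "channel": channel,
    })

ET.indent(root)  # Python 3.9+: pretty-print the tree
print(ET.tostring(root, encoding="unicode"))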
Our annotators also reviewed specific eye movement artifacts (e.g., eye flutter, eyeblinks). Eye movements are often mistaken as seizures due to their similar morphology [7][8]. We have improved our understanding of ocular events and it has allowed us to annotate artifacts in the corpus more carefully.
In this poster, we will present statistics on the newest releases of these corpora and discuss the impact these improvements have had on machine learning research. We will compare TUSZ v1.5.3 and TUAR v2.0.0 with previous versions of these corpora. We will release v1.5.3 of TUSZ and v2.0.0 of TUAR in Fall 2021 prior to the symposium.
ACKNOWLEDGMENTS
Research reported in this publication was most recently supported by the National Science Foundation’s Industrial Innovation and Partnerships (IIP) Research Experience for Undergraduates award number 1827565. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.
REFERENCES
[1] I. Obeid and J. Picone, “The Temple University Hospital EEG Data Corpus,” in Augmentation of Brain Function: Facts, Fiction and Controversy. Volume I: Brain-Machine Interfaces, 1st ed., vol. 10, M. A. Lebedev, Ed. Lausanne, Switzerland: Frontiers Media S.A., 2016, pp. 394–398. https://doi.org/10.3389/fnins.2016.00196.
[2] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Frontiers in Neuroinformatics, vol. 12, pp. 1–6, 2018. https://doi.org/10.3389/fninf.2018.00083.
[3] A. Hamid et al., “The Temple University Artifact Corpus: An Annotated Corpus of EEG Artifacts,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2020, pp. 1–3. https://ieeexplore.ieee.org/document/9353647.
[4] Y. Roy, R. Iskander, and J. Picone, “The NeurekaTM 2020 Epilepsy Challenge,” NeuroTechX, 2020. [Online]. Available: https://neureka-challenge.com/. [Accessed: 01-Dec-2021].
[5] S. Rahman, A. Hamid, D. Ochal, I. Obeid, and J. Picone, “Improving the Quality of the TUSZ Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2020, pp. 1–5. https://ieeexplore.ieee.org/document/9353635.
[6] V. Shah, E. von Weltin, T. Ahsan, I. Obeid, and J. Picone, “On the Use of Non-Experts for Generation of High-Quality Annotations of Seizure Events,” Available: https://www.isip.piconepress.com/publications/unpublished/journals/2019/elsevier_cn/ira. [Accessed: 01-Dec-2021].
[7] D. Ochal, S. Rahman, S. Ferrell, T. Elseify, I. Obeid, and J. Picone, “The Temple University Hospital EEG Corpus: Annotation Guidelines,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/tuh_eeg/annotations/.
[8] D. Strayhorn, “The Atlas of Adult Electroencephalography,” EEG Atlas Online, 2014. [Online].
Hunt, I.; Husain, S.; Simon, J.; Obeid, I.; Picone, J. (IEEE Signal Processing in Medicine and Biology Symposium (SPMB)). Obeid, Iyad; Picone, Joseph; Selesnick, Ivan (Ed.)
The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how best the community can leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public.
The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We currently have digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case with one report per case. The data is organized by tissue type as shown below:
Filenames:
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx
Explanation:
tudp: root directory of the corpus
v1.0.0: version number of the release
svs: the image data type
gastro: the type of tissue
000001: six-digit sequence number used to control directory complexity
00123456: 8-digit patient MRN
2015_03_05: the date the specimen was captured
0s15_12345: the clinical case name
0s15_12345_0a001_00123456_lvl0001_s000.svs: the actual image filename consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001) and a token number (e.g., s000)
0s15_12345_00123456.docx: the filename for the corresponding case report
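Because the convention above is fixed-format, the filename components can be recovered programmatically. The following is a minimal sketch; the field widths are generalized from the single example above and may need adjusting for other cases.

import re

# Sketch: unpack a TUDP image filename into the components described
# above (case name, site code, patient MRN, cut level, token number).
# Field widths are inferred from the example filename and are an
# assumption, not a documented specification.

PATTERN = re.compile(
    r"(?P<case>\w+?)_"          # repeat of the clinical case name
    r"(?P<site>0a\d{3})_"       # site code, e.g., 0a001
    r"(?P<mrn>\d{8})_"          # 8-digit patient MRN
    r"(?P<level>lvl\d{4})_"     # type and depth of the cut, e.g., lvl0001
    r"(?P<token>s\d{3})\.svs$"  # token number, e.g., s000
)

m = PATTERN.match("0s15_12345_0a001_00123456_lvl0001_s000.svs")
print(m.groupdict())
# {'case': '0s15_12345', 'site': '0a001', 'mrn': '00123456',
#  'level': 'lvl0001', 'token': 's000'}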
We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio’s “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference.
Another goal of this poster presentation is to share our experiences with the larger community since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed down the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning many must be sifted through and discarded due to peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours, resulting in a significant loss of productivity. Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes. This scanning project began in January of 2018 when the scanner was first installed. The scanning process was slow at first since there was a learning curve with how the scanner worked and how to obtain samples from the hospital. From its start date until May of 2019, ~20,000 slides were scanned. In the six months from May to November, we tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow.
The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking software settings such as the tissue finder parameters, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process: it takes a couple of hours to sort, clean, and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks.
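The production ceiling follows directly from the numbers quoted above; a quick sanity check:

# Quick sanity check on the scanning throughput figures quoted above.

slides_per_night = 400
nights_per_week = 5
weekly_capacity = slides_per_night * nights_per_week
print(weekly_capacity)  # 2000 slides/week on a single scanner

# Failure-rate improvement: 5% -> 1% of slides failing focal point checks.
for rate in (0.05, 0.01):
    print(f"{rate:.0%} failure -> {weekly_capacity * rate:.0f} rescans/week")
# 5% failure -> 100 rescans/week
# 1% failure -> 20 rescans/week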
To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, patient report, and path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information.
Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma in situ (DCIS) or invasive breast cancer tissue. There will be approximately 1,000 images per class in this corpus. Based on the number of features annotated, we can train on a two-class problem of DCIS or benign, or increase the difficulty by adding classes such as DCIS, benign, stroma, pink tissue, and non-neoplastic.
Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath.
Rahman, Safwanur; Hamid (IEEE Signal Processing in Medicine and Biology Symposium (SPMB)). Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Ed.)
The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1.
The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important that we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to annotate using the pipeline described below, so it is also important from a financial standpoint that we triage the data accurately.
A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process without sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard “3 second rule” is used to determine whether two events should be combined into a single larger event. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, our standard dictates that no seizure less than 3 seconds be annotated. Although there is no universally accepted definition for how long a seizure must be, we find it difficult to distinguish a seizure with confidence from burst suppression or other morphologically similar patterns when the event is only a couple of seconds long. This is due to several reasons, the most notable being the lack of evolution, which is often crucial for determining a seizure.
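The “3 second rule” amounts to a simple post-processing pass over a sorted event list. The following is a minimal sketch; the merge and minimum-duration thresholds come from the text, while the (start, stop) representation of events in seconds is an assumption.

# Sketch of the "3 second rule" described above: merge seizure events
# separated by less than 3 s, then drop any event shorter than 3 s.

MIN_GAP = 3.0       # events closer than this are combined
MIN_DURATION = 3.0  # events shorter than this are not annotated

def apply_three_second_rule(events: list[tuple[float, float]]) -> list[tuple[float, float]]:
    merged: list[list[float]] = []
    for start, stop in sorted(events):
        if merged and start - merged[-1][1] < MIN_GAP:
            merged[-1][1] = max(merged[-1][1], stop)  # combine with previous
        else:
            merged.append([start, stop])
    return [(s, e) for s, e in merged if e - s >= MIN_DURATION]

# Usage: two bursts 2 s apart merge into one event; a 1.5 s blip is dropped.
print(apply_three_second_rule([(10.0, 20.0), (22.0, 30.0), (100.0, 101.5)]))
# [(10.0, 30.0)]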
After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file.
Once an event has been identified, the start and stop time for the seizure is stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete. Even though no annotation is being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low-pass filters are applied, and the amplitude of certain channels is increased. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes that they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate.
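The quoted ~127 work hours can be reconstructed from the per-pass times above. The sketch below assumes, as one reading of the text, that the 3-minute review (second) pass covers every file while the third pass applies only to seizure files; this is an interpretation, not an official accounting.

# Reconstruction of the ~127 work hours quoted above for a 1,000-file
# set with 18% seizure prevalence. Assumption: the 3-minute second pass
# covers every file; the first pass takes 7 min for seizure files and
# 3 min otherwise; the third pass adds 5 min per seizure file.

n_files = 1000
seizure_files = int(0.18 * n_files)    # 180
other_files = n_files - seizure_files  # 820

first_pass = seizure_files * 7 + other_files * 3  # 3720 min
second_pass = n_files * 3                         # 3000 min
third_pass = seizure_files * 5                    # 900 min

total_minutes = first_pass + second_pass + third_pass
print(total_minutes / 60)  # 127.0 hours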
Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can thus be trained on data that resembles what they will see under normal circumstances. An additional benefit of training on released data is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to the point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work.
To ensure that the annotators are consistent, the team periodically conducts an interrater agreement evaluation to confirm that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5].
The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to begin when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as seizure. For seizure end times, all postictal slowing that had been included was removed.
The recent release of v1.5.2 did not include any newly collected data files. Two EEG files were added because they were corrupted in v1.5.1 but have since been recovered and included in the latest release. The progression from v1.5.0 to v1.5.1, and later to v1.5.2, included the re-annotation of all of the EEG files in order to develop a dataset with confident seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions.
The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports. Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020.
The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/.
Cornwell, P., and Self, B. P. The Dynamics Concept Inventory (DCI) – The Past, Present, and Future. Retrieved from https://par.nsf.gov/biblio/10172021. ASEE Annual Conference proceedings. Web. doi:10.18260/1-2--35304.
Cornwell, P., & Self, B. P. The Dynamics Concept Inventory (DCI) – The Past, Present, and Future. ASEE Annual Conference proceedings. Retrieved from https://par.nsf.gov/biblio/10172021. https://doi.org/10.18260/1-2--35304
@article{osti_10172021,
place = {Country unknown/Code not available},
title = {The Dynamics Concept Inventory (DCI) – The Past, Present, and Future},
url = {https://par.nsf.gov/biblio/10172021},
DOI = {10.18260/1-2--35304},
abstractNote = {The Dynamics Concept Inventory (DCI) was developed over 15 years ago as a tool for instructors teaching Dynamics to assess their students’ gains in conceptual understanding of the material. Since its initial release, there have been hundreds of downloads of the instrument, and the initial papers presenting the instrument have been referenced over 100 times. In this paper, we will 1) present a brief history of the development of the DCI, 2) evaluate the ways it has been used since its release with the hope of encouraging more engineering faculty members to use it, 3) summarize results from those who have used it, and 4) present plans for future development and distribution.},
journal = {ASEE Annual Conference proceedings},
author = {Cornwell, P. and Self, B. P.},
}