
Title: Modeling Unstructured Data: Teachers as Learners and Designers of Technology-enhanced Artificial Intelligence Curriculum. In de Vries, E., Hod, Y., & Ahn, J. (Eds.), Proceedings of the 15th International Conference of the Learning Sciences - ICLS 2021 (pp. 617-620). Bochum, Germany: International Society of the Learning Sciences.
In this paper, we present a co-design study with teachers to contribute towards development of a technology-enhanced Artificial Intelligence (AI) curriculum, focusing on modeling unstructured data. We created an initial design of a learning activity prototype and explored ways to incorporate the design into high school classes. Specifically, teachers explored text classification models with the prototype and reflected on the exploration as a user, learner, and teacher. They provided insights about learning opportunities in the activity and feedback for integrating it into their teaching. Findings from qualitative analysis demonstrate that exploring text classification models provided an accessible and comprehensive approach for integrated learning of mathematics, language arts, and computing with the potential of supporting the understanding of core AI concepts including identifying structure within unstructured data and reasoning about the roles of human insight in developing AI technologies.
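Below is a minimal sketch, assuming scikit-learn, of the kind of text classification model the prototype lets learners explore: word counts turn unstructured sentences into structured features that a simple classifier can learn from. The example sentences and labels are invented for illustration and are not taken from the study's materials.

```python
# Hypothetical illustration of a simple text classification model,
# not the study's actual prototype.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: each sentence is labeled with a topic.
texts = [
    "the team won the championship game",
    "the senate passed the new budget bill",
    "the striker scored twice in the final",
    "lawmakers debated the policy all night",
]
labels = ["sports", "politics", "sports", "politics"]

# Bag-of-words counts are the "structure" found within unstructured text;
# a Naive Bayes classifier then learns label probabilities from them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["the coach praised the team"]))  # -> ['sports']
```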
Authors:
Editors:
de Vries, E.; Hod, Y.; Ahn, J.
Award ID(s):
1949110
Publication Date:
Jun-2021
NSF-PAR ID:
10327961
Journal Name:
Proceedings of the 15th International Conference of the Learning Sciences - ICLS 2021
Page Range or eLocation-ID:
617 - 620
Sponsoring Org:
National Science Foundation
More Like this
  1. Obeid, I. (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6]. A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools (a minimal sketch of this style of fine-tuning appears after this list). The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. Not every pathological feature is annotated, meaning excluded areas can include foci particular to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest-performing labels were background, with 97% correct identification, and artifact, with 76% correct identification. A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma (“indc”), inflammation (“infl”), nonneoplastic features (“nneo”), normal (“norm”), and suspicious (“susp”). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation. To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose from how non-background labels were converted into patches. Large areas of background within other labels were isolated within a patch, resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels.
Corresponding patient reports and supporting immunohistochemical stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels “indc” and “dcis” were used only in situations where a micropathologist diagnosed it as such. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental immunohistochemically (IHC) stained slides. When distinguishing whether a focus is a nonneoplastic feature versus a cancerous growth, pathologists apply antigen-targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The informative fields that IHC slides provide could play an integral role in machine learning models for pathology diagnostics. Following the revisions made to all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase in model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% due to the 57% decrease in the total annotated area described by this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3. The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format. A CSV version of the annotation file is also available, which provides a flat, or simple, annotation that is easy for machine learning researchers to access and interface with their systems. Each patient is identified by an anonymized medical reference number. Within each patient’s directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions, with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 each were allotted to the development and evaluation sets, while the remaining 34 were allotted to the training set.
The remaining 222 patients were split up to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients. In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. This data includes 18 different types of tissue, including approximately 38.5% urinary tissue and 16.5% gynecological tissue. These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient ID. We plan to release over 13,000 unannotated slides from the FCCC Corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster. Few digitally annotated databases of pathology samples like TUDP exist due to the extensive data collection and processing required. The breast corpus subset should be released by November 2021. By December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well. We expect to release about 5,600 processed TUH slides in this subset. We have an additional 53,000 unprocessed TUH slides digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnosis model could support pathologists’ workloads and even help prioritize suspected cancerous cases.
ACKNOWLEDGMENTS: This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
REFERENCES
[1] N. Shawki et al., “The Temple University Digital Pathology Corpus,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432.
[2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning.” Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.
[3] A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015.
[4] C.-J. Wu et al., “Machine Learning at Facebook: Understanding Inference at the Edge,” in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201.
[5] I. Caswell and B. Liang, “Recent Advances in Google Translate,” Google AI Blog: The latest from Google Research, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021].
[6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, “Low Latency Real-Time Seizure Detection Using Transfer Deep Learning,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/.
[7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/.
[8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, “Recent Advances in the Temple University Digital Pathology Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859.
[9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, “Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ,” Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
  2. Mobile applications have become widely popular for their ability to access real-time information. In electric vehicle (EV) mobility, these applications are used by drivers to locate charging stations in public spaces, pay for charging transactions, and engage with other users. This activity generates a rich source of data about charging infrastructure and behavior. However, an increasing share of this data is stored as unstructured text, inhibiting our ability to interpret behavior in real time. In this article, we implement recent transformer-based deep learning algorithms, BERT and XLNet, that have been tailored to automatically classify short user reviews about EV charging experiences (an illustrative sketch of this style of classifier appears after this list). We achieve classification results with a mean accuracy of over 91% and a mean F1 score of over 0.81, allowing for more precise detection of topic categories, even in the presence of highly imbalanced data. Using these classification algorithms as a pre-processing step, we analyze a U.S. national dataset with econometric methods to discover the dominant topics of discourse in charging infrastructure. After adjusting for station characteristics and other factors, we find that the functionality of a charging station is the dominant topic among EV drivers and is more likely to be discussed at points-of-interest with negative user experiences.
  3. Our NSF-funded project, CoBuild19, sought to address the large-scale shift to at-home learning based on nationwide school closures that occurred during COVID-19 by creating making/STEM activities for families with children in grades K-6. Representing multiple organizations, our CoBuild19 project team developed approximately 60 STEM activities that make use of items readily available in most households. From March through June 2020, we produced and shared videos and activity guides, averaging 3+ new activities per week. Initially, the activities consisted of whatever team members could pull together, but we soon created weekly themes with associated activities, including Design and Prototype Week, Textiles Week, Social and Emotional Learning Week, and one week that highlighted kids sharing cooking and baking recipes for other kids. All activities were delivered fully online. To do so, our team started a Facebook group on March 13, 2020. Membership grew to 3,490 followers by April 1st, to 4,245 by May 1st, and leveled off at approximately 5,100 members since June 2020. To date, 22 of our videos have over 1,000 views, with the highest garnering 23K views. However, we had very little participation in the form of submitted videos, images, or text from families sharing what they were creating, limiting our possible analyses. While we had some initial participation by members, as the FB group grew, substantive evidence of participation faded. To better understand this drop, we polled FB group members about their use of the activities. Responses (n = 101) were dominated by the option, "We are glad to know the ideas are available, but we are not using much" (49%), followed by, "We occasionally do activities" (35%). At this point, we had no data about home participation, so we decided to experiment with different approaches. Our next efforts focused on conducting virtual maker/STEM camps. Leveraging the content produced in the first months of CoBuild19, we hosted two rounds of Camp CoBuild by the end of July, serving close to 100 campers. The camps generated richer data in the form of recorded Zoom camp sessions, where campers made synchronously with educators, and youth-created Flipgrid videos, where campers shared their process and products for each activity. We also collected post-camp surveys and some caregiver interviews. Preliminary analyses have focused on the range of participant engagement and which malleable factors may be associated with deeper engagement. Initial feedback from caregivers indicated that their children gained confidence to experiment with simple materials through engaging in these activities. This project sought to fill what we perceived as a developing need in the community at a large scale (e.g., across the US). Although we have not achieved the level of success we expected, the project achieved quick growth that took us in a different direction than we originally intended. Overall, we created content that educators and families can use to engage kids with minimal materials. Additionally, we have a few models of extended engagement (e.g., Camp CoBuild) that we can develop further into future offerings.
  4. Agrawal, Garima (Ed.)
    Cybersecurity education is exceptionally challenging, as it involves learning about complex attacks and tools while developing the critical problem-solving skills needed to defend systems. For a student or novice researcher in the cybersecurity domain, there is a need to design an adaptive learning strategy that can break complex tasks and concepts into simple representations. An AI-enabled automated cybersecurity education system can improve cognitive engagement and active learning. Knowledge graphs (KGs) provide a visual representation in a graph that supports reasoning and interpretation over the underlying data, making them suitable for use in education and interactive learning. However, there are no publicly available datasets for the cybersecurity education domain with which to build such systems. The data is present as unstructured educational course material, Wiki pages, capture the flag (CTF) writeups, etc. Creating knowledge graphs from unstructured text is challenging without an ontology or annotated dataset, and data annotation for cybersecurity requires domain experts. To address these gaps, we make three contributions in this paper. First, we propose an ontology for the cybersecurity education domain for students and novice learners. Second, we develop AISecKG, a triple dataset with cybersecurity-related entities and relations as defined by the ontology. This dataset can be used to construct knowledge graphs to teach cybersecurity and promote cognitive learning (a minimal sketch of building such a graph from triples appears after this list). It can also be used to build downstream applications like recommendation systems or self-learning question-answering systems for students. The dataset would also help identify malicious named entities and their probable impact. Third, using this dataset, we show a downstream application to extract custom-named entities from texts and educational material on cybersecurity.
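The pilot and follow-up experiments in item 1 fine-tune ResNet18 on annotated patches. As a point of reference, here is a minimal sketch, assuming PyTorch and torchvision; the directory layout, transforms, and hyperparameters are placeholders rather than the authors' actual pipeline.

```python
# Hypothetical sketch of fine-tuning ResNet18 for nine-label patch
# classification; paths and settings are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_LABELS = 9  # the nine annotation labels described in item 1

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes patches are stored one folder per label (invented layout).
train_set = datasets.ImageFolder("patches/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)  # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```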
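Item 2 classifies short EV charging reviews with BERT and XLNet. The following is a minimal sketch, assuming the Hugging Face transformers library; the checkpoint name, label count, and example review are stand-ins, not the paper's released model or data.

```python
# Hypothetical sketch of transformer-based review classification;
# "bert-base-uncased" stands in for a model fine-tuned on labeled reviews.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # assumption: a fine-tuned checkpoint in practice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

review = "Charger was broken again, second time this week."
inputs = tokenizer(review, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The arg-max index maps to a topic category (e.g., "functionality");
# a freshly initialized head gives arbitrary output until fine-tuned.
print(logits.argmax(dim=-1).item())
```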
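Item 4 builds knowledge graphs from (entity, relation, entity) triples. Here is a minimal sketch, assuming networkx, of loading AISecKG-style triples into a directed graph that a tutoring application could query; the triples shown are invented examples, not entries from the dataset.

```python
# Hypothetical sketch of turning subject-relation-object triples into a
# knowledge graph; the triples are illustrative, not from AISecKG.
import networkx as nx

triples = [
    ("nmap", "is_a", "scanning_tool"),
    ("scanning_tool", "used_for", "reconnaissance"),
    ("reconnaissance", "phase_of", "attack_lifecycle"),
]

kg = nx.DiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

# A traversal a self-learning Q&A system might run: what relates to "nmap"?
for _, tail, data in kg.edges("nmap", data=True):
    print(f"nmap --{data['relation']}--> {tail}")
```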