- NSF-PAR ID: 10165859
- Journal Name: Case Studies in the Environment
- ISSN: 2473-9510
- Sponsoring Org: National Science Foundation
More Like this
-
Composable infrastructure holds the promise of accelerating the pace of academic research and discovery by enabling researchers to tailor the resources of a machine (e.g., GPUs, storage, NICs), on demand, to address application needs. We were first introduced to composable infrastructure in 2018, and at the same time, there was growing demand among our College of Engineering faculty for GPU systems for data science, artificial intelligence / machine learning / deep learning, and visualization. Many purchased their own individual desktop or deskside systems, a few pursued more costly cloud and HPC solutions, and others looked to the College or campus computer center for GPU resources which, at the time, were scarce. After surveying the diverse needs of our faculty and studying product offerings by a few nascent startups in the composable infrastructure sector, we applied for and received a grant from the National Science Foundation in November 2019 to purchase a mid-scale system, configured to our specifications, for use by faculty and students for research and research training. This paper describes our composable infrastructure solution and implementation for our academic community. Given how modern workflows are progressively moving to containers and cloud frameworks (using Kubernetes) and to programming notebooks (primarily Jupyter), both for ease of use and for ensuring reproducible experiments, we initially adapted these tools for our system. We have since made it simpler to use our system, and now provide our users with a public-facing JupyterHub server. We also added an expansion chassis to our system to enable composable co-location, which is a shared central architecture in which our researchers can insert and integrate specialized resources (GPUs, accelerators, networking cards, etc.) needed for their research.
In February 2020, our system was installed and made operational, and we began providing access to faculty in the College of Engineering. Now, two years later, it is used by over 40 faculty and students, plus some external collaborators, for research and research training. Their use cases and experiences are briefly described in this paper. Composable infrastructure has proven to be a useful computational system for workload variability, uneven applications, and modern workflows in academic environments.
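The container-based workflow described above can be illustrated with a minimal Kubernetes pod specification that requests a single GPU. This is a generic sketch under common conventions (the pod name and container image are illustrative; the paper does not publish its actual configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-notebook                       # illustrative name
spec:
  containers:
    - name: jupyter
      image: jupyter/tensorflow-notebook   # a public Jupyter notebook image
      resources:
        limits:
          nvidia.com/gpu: 1                # request one GPU via the device plugin
```

The `nvidia.com/gpu` resource limit is how the standard NVIDIA device plugin exposes GPUs to the Kubernetes scheduler; on a composable system, the physical GPU behind that resource can be attached to the node on demand.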
-
Abstract To date, many AI initiatives (eg, AI4K12, CS for All) developed standards and frameworks as guidance for educators to create accessible and engaging Artificial Intelligence (AI) learning experiences for K‐12 students. These efforts revealed a significant need to prepare youth to gain a fundamental understanding of how intelligence is created, applied, and its potential to perpetuate bias and unfairness. This study contributes to the growing interest in K‐12 AI education by examining student learning of modelling real‐world text data. Four students from an Advanced Placement computer science classroom at a public high school participated in this study. Our qualitative analysis reveals that the students developed nuanced and in‐depth understandings of how text classification models—a type of AI application—are trained. Specifically, we found that in modelling texts, students: (1) drew on their social experiences and cultural knowledge to create predictive features, (2) engineered predictive features to address model errors, (3) described model learning patterns from training data and (4) reasoned about noisy features when comparing models. This study contributes to an initial understanding of student learning of modelling unstructured data and offers implications for scaffolding in‐depth reasoning about model decision making.
Practitioner notes What is already known about this topic
Scholarly attention has turned to examining Artificial Intelligence (AI) literacy in K‐12 to help students understand the working mechanism of AI technologies and critically evaluate automated decisions made by computer models.
While efforts have been made to engage students in understanding AI through building machine learning models with data, few go in depth into the teaching and learning of feature engineering, a critical concept in modelling data.
There is a need for research to examine students' data modelling processes, particularly in the little‐researched realm of unstructured data.
What this paper adds
Results show that students developed nuanced understandings of models learning patterns in data for automated decision making.
Results demonstrate that students drew on prior experience and knowledge in creating features from unstructured data in the learning task of building text classification models.
Students needed support in performing feature engineering practices, reasoning about noisy features and exploring features in rich social contexts that the data set is situated in.
Implications for practice and/or policy
It is important for schools to provide hands‐on model building experiences for students to understand and evaluate automated decisions from AI technologies.
Students should be empowered to draw on their cultural and social backgrounds as they create models and evaluate data sources.
To extend this work, educators should consider opportunities to integrate AI learning in other disciplinary subjects (ie, outside of computer science classes).
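The feature-engineering practices described in the abstract above can be made concrete with a toy example. The sketch below is illustrative only and is not the study's classroom material: hand-crafted features, including a deliberately noisy word-count feature (echoing the "noisy features" the students reasoned about), feed a simple perceptron that learns patterns from labelled texts:

```python
def extract_features(text):
    # Hand-engineered predictive features, analogous to students drawing
    # on everyday language knowledge to decide what signals a label.
    t = text.lower()
    return [
        int("help" in t),   # keyword feature
        int("?" in t),      # message asks a question
        len(t.split()),     # word count: a noisier, less predictive feature
    ]

def predict(w, b, x):
    # Linear threshold unit: weighted feature sum plus bias.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(X, y, epochs=20, lr=0.1):
    # Perceptron training: the model "learns patterns" by nudging feature
    # weights whenever it misclassifies a training example.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - predict(w, b, xi)
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

# Tiny toy dataset: label 1 = message asks for help, 0 = otherwise.
texts = [
    "can you help me with this?",
    "i need help understanding the model",
    "help please, it crashed again",
    "the weather is nice today",
    "we finished the project on time",
    "lunch was great",
]
labels = [1, 1, 1, 0, 0, 0]

w, b = train([extract_features(t) for t in texts], labels)
print(predict(w, b, extract_features("could someone help?")))  # → 1
```

After training, the learned weights emphasize the reliable keyword and question-mark features while the noisy word-count feature is driven toward zero, which is the kind of model behaviour the students in the study reasoned about when comparing models.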
-
Abstract In this study, support for teaching data literacy in social studies is provided through the design of a pedagogical support system informed by participatory design sessions with both pre‐service and in‐service social studies teachers. It provides instruction on teaching and learning data literacy in social studies, examples of standards‐based lesson plans, made‐to‐purpose data visualization tools and minimal manuals that put existing online tools in a social studies context. Based on case studies of eleven practicing teachers, this study provides insight into features of technology resources that social studies teachers find usable and useful for incorporating data visualizations into standards‐ and inquiry‐based social studies instruction, teaching critical analysis of data visualizations and helping students create data visualizations with online computing tools. The final result, though, is that few of our participating teachers have yet adopted the provided resources into their own classrooms, which highlights weaknesses of the technology acceptance model for describing teacher adoption.
Practitioner notes What is already known about this topic
Data literacy is an important part of social studies education in the United States.
Most teachers do not teach data literacy as a part of social studies.
Teachers may adopt technology to help them teach data literacy if they think it is useful and usable.
What this paper adds
Educational technology can help teachers learn about data literacy in social studies.
Social studies teachers want simple tools that fit with their existing curricula, give them new project ideas and help students learn difficult concepts.
Making tools useful and usable does not predict adoption; context plays a large role in a social studies teacher's adoption.
Implications for practice and/or policy
Designing purpose‐built tools for social studies teachers will encourage them to teach data literacy in their classes.
Professional learning opportunities for teachers around data literacy should include opportunities for experimentation with tools.
Teachers are not likely to use tools if they are not accompanied by lesson and project ideas.
-
Abstract The use of two‐dimensional images to teach students about three‐dimensional molecules continues to be a prevalent issue in many classrooms. As affordable visualization technologies continue to advance, there has been increasing interest in utilizing novel technology, such as augmented reality (AR), in the development of molecular visualization tools. Existing evaluations of these visual–spatial learning tools focus primarily on student performance and attitude, with little attention toward potential inequity in student participation. Our study adds to the current literature on introducing molecular visualization technology in biochemistry classrooms by examining the potential inequity in a group activity mediated by AR technology. Adapting the participatory equity framework to our specific context, we view equity and inequity in terms of access to the technological conversational floor, a social space created when people enter technology‐mediated joint endeavors. We explore three questions: What are the different ways students interact with an AR model of the potassium channel? What are salient patterns of participation that may signify inequity in classroom technology use? What is the interplay between group social dynamics and the introduction of AR technology in the context of a technology‐mediated group activity? Pairing qualitative analysis with quantitative metrics, our mixed‐methods approach produced a complex story of student participation in an AR‐mediated group activity. The patterns of student participation showed that equity and inequity in an AR‐mediated biochemistry group learning activity are fluid and multifaceted. It was observed that students who gave more explanations during group discussion also had more interactions with the AR model (i.e., they had greater access to the technological conversational floor), and their opinion of the AR model may have greater influence on how their group engages with the AR model.
This study provides more nuanced ways of conceptualizing equity and inequity in biochemistry learning settings.
-
Machine Learning Made Easy (MLme): a comprehensive toolkit for machine learning–driven data analysis
Abstract Background Machine learning (ML) has emerged as a vital asset for researchers to analyze and extract valuable information from complex datasets. However, developing an effective and robust ML pipeline can present a real challenge, demanding considerable time and effort and thereby impeding research progress. Existing tools in this landscape require a profound understanding of ML principles and programming skills. Furthermore, users must comprehensively configure their ML pipeline to obtain optimal performance.
Results To address these challenges, we have developed a novel tool called Machine Learning Made Easy (MLme) that streamlines the use of ML in research, specifically focusing on classification problems at present. By integrating 4 essential functionalities—namely, Data Exploration, AutoML, CustomML, and Visualization—MLme fulfills the diverse requirements of researchers while eliminating the need for extensive coding efforts. To demonstrate the applicability of MLme, we conducted rigorous testing on 6 distinct datasets, each presenting unique characteristics and challenges. Our results consistently showed promising performance across different datasets, reaffirming the versatility and effectiveness of the tool. Additionally, by utilizing MLme’s feature selection functionality, we successfully identified significant markers for CD8+ naive (BACH2), CD16+ (CD16), and CD14+ (VCAN) cell populations.
Conclusion MLme serves as a valuable resource for leveraging ML to facilitate insightful data analysis and enhance research outcomes, while alleviating concerns related to complex coding scripts. The source code and a detailed tutorial for MLme are available at https://github.com/FunctionalUrology/MLme.
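For readers unfamiliar with what an AutoML-style tool automates, the toy sketch below mimics the core loop in miniature: evaluate several candidate classifiers with one shared scoring procedure and keep the best. This is a generic, self-contained illustration of the concept, not MLme's actual API (see the linked repository for the real tool):

```python
def majority(train_X, train_y, x):
    # Baseline model: always predict the most common training label.
    return max(set(train_y), key=train_y.count)

def nearest_neighbour(train_X, train_y, x):
    # 1-NN model: predict the label of the closest training point.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(zip(train_X, train_y), key=lambda p: dist(p[0], x))[1]

def loo_accuracy(model, X, y):
    # Leave-one-out cross-validation: hold out each point in turn,
    # train on the rest, and score the held-out prediction.
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += model(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

# Toy two-cluster dataset.
X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (1.0, 1.1), (0.9, 1.0), (1.2, 0.8)]
y = [0, 0, 0, 1, 1, 1]

# The "AutoML" step: score every candidate the same way, keep the best.
candidates = {"majority": majority, "1-nn": nearest_neighbour}
scores = {name: loo_accuracy(m, X, y) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)  # the 1-NN model wins on this separable toy data
```

Tools like MLme layer data exploration, preprocessing, hyperparameter search and visualization on top of this basic compare-and-select loop, which is what removes the coding burden the abstract describes.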