Title: Technology Laboratories: Facilitating Instruction for Cyberinfrastructure Infused Data Sciences
While artificial intelligence and machine learning (AI/ML) frameworks gain prominence in science and engineering, most researchers face significant challenges in adapting complex AI/ML workflows to campus and national cyberinfrastructure (CI) environments. Data from the Texas A&M High Performance Research Computing (HPRC) researcher training program indicate that researchers increasingly want to learn how to migrate and run their pre-existing AI/ML frameworks on large-scale computing environments. Building on the continuing success of our work developing innovative pedagogical approaches for CI training, we expand CI-infused pedagogy to teach technology-based AI and data sciences. We revisit the pedagogical approaches used in the decades-old tradition of laboratories in the physical sciences, which taught concepts via experiential learning. Here, we structure a series of exercises in interactive computing environments that give researchers immediate hands-on experience with the AI/ML and data science technologies they will use on larger CI resources. These exercises, called “tech-labs,” assume that participating researchers are already familiar with AI/ML approaches, and focus on hands-on exercises that teach researchers how to use those approaches on large-scale CI. The tech-labs comprise four consecutive sessions, each introducing a learner to specific technologies offered in CI environments for AI/ML and data workflows. We report on our tech-lab for Python-based AI/ML approaches, in which learners are introduced to Jupyter Notebooks and then work through exercises using Pandas, Matplotlib, Scikit-learn, and Keras. The program includes a series of enhancements, such as container support and easy launch of virtual environments in our Web-based computing interface, and the approach scales to programs using a command-line interface (CLI) as well. In all, the program offers a shift in focus from teaching AI/ML toward increasing the adoption of AI/ML on large-scale CI.
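The abstract does not reproduce the exercises themselves; as an illustration only, the following is a minimal sketch of the kind of Notebook cell sequence such a tech-lab might walk through, using the toy Iris dataset (the dataset, column names, and model sizes are illustrative assumptions, not the program's actual materials):

import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, as on an HPC node without a display
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Load a toy dataset into a Pandas DataFrame (a stand-in for a learner's own data).
iris = load_iris(as_frame=True)
df = iris.frame

# Exploratory plot, as an early Notebook cell might show; saved to disk rather than displayed.
df.plot.scatter(x="sepal length (cm)", y="petal length (cm)", c="target", colormap="viridis")
plt.savefig("iris_scatter.png")

X_train, X_test, y_train, y_test = train_test_split(
    df[iris.feature_names].to_numpy(), df["target"].to_numpy(),
    test_size=0.25, random_state=0)

# Scikit-learn baseline classifier.
clf = LogisticRegression(max_iter=500).fit(X_train, y_train)
print("sklearn accuracy:", clf.score(X_test, y_test))

# Small Keras network covering the deep-learning portion of the lab.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, verbose=0)
print("keras accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])

Saving the figure to a file rather than displaying it mirrors the batch and CLI settings the abstract mentions; in an interactive Jupyter session the plot would render inline instead.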
Award ID(s):
1730695 1925764
PAR ID:
10553939
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Journal of Computational Science Education
Date Published:
Journal Name:
Journal of Computational Science Education
Volume:
13
Issue:
1
ISSN:
2153-4136
Page Range / eLocation ID:
44-49
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Many of our generation’s most pressing environmental science problems are wicked problems, which means they cannot be cleanly isolated and solved with a single ‘correct’ answer (e.g., Rittel 1973; Wirz 2021). The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) seeks to address such problems by developing synergistic approaches with a team of scientists from three disciplines: environmental science (including atmospheric, ocean, and other physical sciences), AI, and social science including risk communication. As part of our work, we developed a novel approach to summer school, held from June 27-30, 2022. The goal of this summer school was to teach a new generation of environmental scientists how to cross disciplines and develop approaches that integrate all three disciplinary perspectives and approaches in order to solve environmental science problems. In addition to a lecture series that focused on the synthesis of AI, environmental science, and risk communication, this year’s summer school included a unique Trust-a-thon component where participants gained hands-on experience applying both risk communication and explainable AI techniques to pre-trained ML models. We had 677 participants from 63 countries register and attend online. Lecture topics included trust and trustworthiness (Day 1), explainability and interpretability (Day 2), data and workflows (Day 3), and uncertainty quantification (Day 4). For the Trust-a-thon we developed challenge problems for three different application domains: (1) severe storms, (2) tropical cyclones, and (3) space weather. Each domain had an associated user persona to guide user-centered development.
  2. This work examines the design of computer science informal learning programs in two public libraries, offered through a university-library partnership. Specifically, the work focuses on dilemmas encountered by program facilitators when designing informal environments that engage culturally diverse youth with computational thinking (CT) concepts. We analyzed over 40 reflection journals from program facilitators, illustrating content selection, pedagogical decisions, and the application of culturally relevant frameworks in the design of the computing environment. Findings of this study provide insights into the design, implementation, and outcomes of informal computing programs for youth from diverse backgrounds.
  3. Recent advances in Artificial Intelligence (AI) have brought society closer to the long-held dream of creating machines to help with both common and complex tasks and functions. From recommending movies to detecting disease in its earliest stages, AI has become an aspect of daily life many people accept without scrutiny. Despite its functionality and promise, AI has inherent security risks that users should understand and programmers must be trained to address. The ICE (integrity, confidentiality, and equity) cybersecurity labs, developed by a team of cybersecurity researchers, address these vulnerabilities in AI models through a series of hands-on, inquiry-based labs. Through experimenting with and manipulating data models, students can experience firsthand how adversarial samples and bias can degrade the integrity, confidentiality, and equity of deep learning neural networks, as well as implement security measures to mitigate these vulnerabilities. This article addresses the pedagogical approach underpinning the ICE labs and discusses both sample activities and technological considerations for teachers who want to implement these labs with their students.
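The abstract above does not include the labs' code; the following is a minimal, hypothetical sketch of the adversarial-sample effect it describes, using the Fast Gradient Sign Method (FGSM) against a small Keras classifier (the model, data, and perturbation size are illustrative assumptions, not the ICE labs' actual materials):

import tensorflow as tf
from tensorflow import keras

# Train a tiny stand-in classifier on MNIST digits.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# FGSM: perturb each pixel slightly in the direction that increases the loss.
x = tf.convert_to_tensor(x_test[:200], dtype=tf.float32)
y = tf.convert_to_tensor(y_test[:200])
loss_fn = keras.losses.SparseCategoricalCrossentropy()
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = loss_fn(y, model(x))
grad = tape.gradient(loss, x)
x_adv = tf.clip_by_value(x + 0.15 * tf.sign(grad), 0.0, 1.0)

# Accuracy typically drops sharply on the perturbed copies.
print("clean accuracy:      ", model.evaluate(x, y, verbose=0)[1])
print("adversarial accuracy:", model.evaluate(x_adv, y, verbose=0)[1])

The perturbations are small enough that the digits remain readable to a human, which is what makes the integrity degradation the labs demonstrate so striking.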