The EngageAI Institute focuses on AI‐driven narrative‐centered learning environments that create engaging story‐based problem‐solving experiences to support collaborative learning. The institute's research has three complementary strands. First, the institute creates narrative‐centered learning environments that generate interactive story‐based problem scenarios to elicit rich communication, encourage coordination, and spark collaborative creativity. Second, the institute creates virtual embodied conversational agent technologies with multiple modalities for communication (speech, facial expression, gesture, gaze, and posture) to support student learning. Embodied conversational agents are driven by advances in natural language understanding, natural language generation, and computer vision. Third, the institute is creating an innovative multimodal learning analytics framework that analyzes parallel streams of multimodal data derived from students’ conversations, gaze, facial expressions, gesture, and posture as they interact with each other, with teachers, and with embodied conversational agents. Woven throughout the institute's activities is a strong focus on ethics, with an emphasis on creating AI‐augmented learning that is deeply informed by considerations of fairness, accountability, transparency, trust, and privacy. The institute emphasizes broad participation and diverse perspectives to ensure that advances in AI‐augmented learning address inequities in STEM. The institute brings together a multistate network of universities, diverse K‐12 school systems, science museums, and nonprofit partners. Key to all of these endeavors is an emphasis on diversity, equity, and inclusion.
The Institute for Student‐AI Teaming (iSAT) addresses the foundational question:
- Award ID(s):
- 2019805
- NSF-PAR ID:
- 10499524
- Publisher / Repository:
- AI Magazine
- Journal Name:
- AI Magazine
- Volume:
- 45
- Issue:
- 1
- ISSN:
- 0738-4602
- Page Range / eLocation ID:
- 61 to 68
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Artificial intelligence (AI) can enhance teachers' capabilities by sharing control over different parts of learning activities. This is especially true for complex learning activities, such as dynamic learning transitions, where students move between individual and collaborative learning in unplanned ways as the need arises. Yet few initiatives have considered how shared responsibility between teachers and AI can support learning, or how teachers' voices might be included to inform design decisions. The goal of our article is twofold. First, we describe a secondary analysis of our co‐design process, comprising six design methods, to understand how teachers conceptualise sharing control with an AI co‐orchestration tool called Pair‐Up. We worked with 76 middle school math teachers, each taking part in one to three methods, to create a co‐orchestration tool that supports dynamic combinations of individual and collaborative learning using two AI‐based tutoring systems. We leveraged qualitative content analysis to examine teachers' views about sharing control with Pair‐Up, and we describe high‐level insights about the human‐AI interaction, including control, trust, responsibility, efficiency, and accuracy. Second, we use our results as an example showcasing how human‐centred learning analytics can be applied to the design of human‐AI technologies, and we share reflections for human‐AI technology designers regarding the methods that might be fruitful for eliciting teacher feedback and ideas. Our findings illustrate the design of a novel co‐orchestration tool to facilitate transitions between individual and collaborative learning and highlight considerations and reflections for designers of similar systems.
Practitioner notes
What is already known about this topic:
Artificial Intelligence (AI) can help teachers facilitate complex classroom activities, such as having students move between individual and collaborative learning in unplanned ways.
Designers should use human‐centred design approaches to give teachers a voice in deciding what AI might do in the classroom and if or how they want to share control with it.
What this paper adds:
Presents teacher views about how they want to share control with AI to support students moving between individual and collaborative learning.
Describes how we adapted six design methods to design AI features.
Illustrates a complete, iterative process to create human‐AI interactions to support teachers as they facilitate students moving from individual to collaborative learning.
Implications for practice:
We share five implications for designers that teachers highlighted as necessary when designing AI features, including control, trust, responsibility, efficiency, and accuracy.
Our work also includes a reflection on our design process and implications for future design processes.
-
Abstract The National Science Foundation (NSF) Artificial Intelligence (AI) Institute for Edge Computing Leveraging Next Generation Networks (Athena) seeks to drive a transformation in modern edge computing by advancing AI foundations, computing paradigms, networked computing systems, and edge services and applications from a completely new computing perspective. Led by Duke University, Athena leverages revolutionary developments in computer systems, machine learning, networked computing systems, cyber‐physical systems, and sensing. Members of Athena form a multidisciplinary team from eight universities. Athena organizes its research activities under four interrelated thrusts supporting edge computing: Foundational AI, Computer Systems, Networked Computing Systems, and Services and Applications, which together constitute an ambitious and comprehensive research agenda. Athena's research tasks focus on developing AI‐driven next‐generation technologies for edge computing, establishing new algorithmic and practical foundations of AI, and evaluating research outcomes through a combination of analytical, experimental, and empirical instruments, with particular attention to use‐inspired research. The researchers of Athena demonstrate a cohesive effort by synergistically integrating the research outcomes from the four thrusts into three pillars: Edge Computing AI Systems, Collaborative Extended Reality (XR), and Situational Awareness and Autonomy. Athena is committed to a robust and comprehensive suite of educational and workforce development endeavors, alongside domestic and international collaboration and knowledge transfer efforts with external stakeholders that include both industry and community partnerships.
-
Abstract In response to Li, Reigh, He, and Miller's commentary, Can we and should we use artificial intelligence for formative assessment in science?, we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding in instructional decisions. We contend that AI‐generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence‐based arguments when discussing bias. We assert that AI‐based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers' time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI‐based formative assessment.
Recent years have seen growing recognition of the importance of enabling K-12 students to learn computer science. Meanwhile, artificial intelligence, a field of computer science, has the potential to profoundly reshape society. This has generated increasing demand for fostering an AI-literate populace. However, there is little work exploring how to introduce K-12 students to AI and how to support K-12 teachers in integrating AI into their classrooms. In this work, we explore how to introduce AI learning experiences into upper elementary classrooms (student ages 8 to 11). With a focus on integrating AI and life science, we present initial work on a collaborative game-based learning environment that features rich problem-based learning scenarios, enabling students to gain experience with AI applied toward solving real-world life-science problems.