-
Purpose: Challenges in teaching the engineering design process (EDP) at the high-school level, such as promoting good documentation practices, are well documented. While developments in educational artificial intelligence (AI) systems have the potential to assist in addressing these challenges, the open-ended nature of the EDP leads to challenges that often lack the specificity required for actionable AI development. In addition, conventional educational AI systems (e.g., intelligent tutoring systems) primarily target procedural domain tasks with well-defined outcomes and problem-solving strategies, whereas the EDP involves open-ended problems and multiple correct solutions, making the timing and appropriateness of AI intervention complex.
Design/methodology/approach: The authors conducted a six-week Research through Co-Design (RtCD) process (i.e., a co-design process rooted in Research through Design) with two experienced high-school engineering teachers to co-construct actionable insight in the form of AI intervention points (AI-IPs) in engineering education, where an AI system can effectively intervene to support teachers while highlighting their pedagogical practices.
Findings: This paper leveraged the design of task models to iteratively refine the authors' prior understanding of teachers' experiences with teaching the EDP into three AI-IPs, related to documentation, ephemeral interactions between teachers and students, and disruptive failures, that can serve as a focus for intelligent educational system designs.
Originality/value: This paper discusses the implications of these AI-IPs for designing educational AI systems to support engineering education, as well as the importance of leveraging RtCD methodologies to engage teachers in developing intelligent educational systems that align with their needs and afford them control over computational interventions in their classrooms.
-
Capturing analytic provenance is important for refining sensemaking analysis. However, understanding this provenance can be difficult. First, making sense of the reasoning in intermediate steps is time-consuming; in distributed sensemaking especially, the provenance is less cohesive because each analyst only sees a small portion of the data without an understanding of the overall collaboration workflow. Second, analysis errors from one step can propagate to later steps. Furthermore, in exploratory sensemaking, it is difficult to define what an error is, since there are no correct answers to reference. In this paper, we explore provenance analysis for distributed sensemaking in the context of crowdsourcing, where distributed analysis contributions are captured in microtasks. We propose crowd auditing as a way to help individual analysts visualize and trace provenance to debug distributed sensemaking. To evaluate this concept, we implemented a crowd auditing tool, CrowdTrace. Our user-study evaluation demonstrates that CrowdTrace offers an effective mechanism to audit and refine multi-step crowd sensemaking.