Title: Comparing alternative approaches to debriefing in a tool to support peer-led simulation-based training
This poster describes an early-stage project. It introduces MedDbriefer, a tablet-based tool that allows small groups of paramedic students to practice realistic prehospital emergency care scenarios. While two or more students collaborate as members of an emergency medical service (EMS) team, a peer uses the tablet’s checklists to record the team’s actions. The system then analyzes the event log to provide an automated debriefing on the team’s performance. Although debriefing is purported to be one of simulation-based training’s most critical components, there is little research to guide human and automated debriefing. We are implementing two approaches to automated debriefing and will compare their effectiveness in an upcoming randomized controlled trial.
Katz, S.; Albacete, P.; Jordan, P.; Silliman, S.; Yang, T.
(Proceedings of the Twenty-third International Conference of the Learning Sciences—ICLS 2023)
Blikstein, P.; Van Aalst, J.; Kizito, R.; Brennan, K.
(Ed.)
MedDbriefer allows paramedic students to engage in simulated prehospital emergency care scenarios and receive an automated debriefing on their performance. It is a web-based tool that runs on a tablet. Although debriefing is purported to be one of simulation-based training’s most critical components, there is little empirical research to guide human and automated debriefing. We implemented two approaches to debriefing in MedDbriefer and are conducting a randomized controlled trial to compare their effectiveness.
MedDbriefer is a web-based ITS designed to enable healthcare students to do clinical scenarios anytime, anywhere. While one student “voice treats” a scenario’s patient(s) as the leader of a mock Emergency Medical Services (EMS) team, a peer records the team’s actions using the system’s checklists on a tablet. When the scenario ends, MedDbriefer analyzes the event log and generates a debriefing. MedDbriefer also provides a platform for research on simulation-based training. This paper describes how the system’s debriefing engine could be extended to deliver feedback during a scenario, as well as afterwards. MedDbriefer could then be used to compare the effectiveness of different ways of timing feedback delivery in computer-based simulation systems.
Chen, Zirong; An, Ziyan; Reynolds, Jennifer; Mullen, Kristin; Martini, Stephen; Ma, Meiyi
(Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence)
Emergency response services are critical to public safety, with 9-1-1 call-takers playing a key role in ensuring timely and effective emergency operations. To ensure call-taking performance consistency, quality assurance is implemented to evaluate and refine call-takers' skillsets. However, traditional human-led evaluations struggle with high call volumes, leading to low coverage and delayed assessments. We introduce LogiDebrief, an AI-driven framework that automates traditional 9-1-1 call debriefing by integrating Signal-Temporal Logic (STL) with Large Language Models (LLMs) for fully-covered rigorous performance evaluation. LogiDebrief formalizes call-taking requirements as logical specifications, enabling systematic assessment of 9-1-1 calls against procedural guidelines. It employs a three-step verification process: (1) contextual understanding to identify responder types, incident classifications, and critical conditions; (2) STL-based runtime checking with LLM integration to ensure compliance; and (3) automated aggregation of results into quality assurance reports. Beyond its technical contributions, LogiDebrief has demonstrated real-world impact. Successfully deployed at Metro Nashville Department of Emergency Communications, it has assisted in debriefing 1,701 real-world calls, saving 311.85 hours of active engagement. Empirical evaluation with real-world data confirms its accuracy, while a case study and extensive user study highlight its effectiveness in enhancing call-taking performance.
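The STL-based runtime check at the core of this pipeline can be illustrated with a minimal sketch. The event format, requirement labels, and deadlines below are invented for illustration, not taken from LogiDebrief; the sketch only shows the flavor of checking a bounded "eventually" property against a call's event log and aggregating results.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # seconds since the call started
    label: str    # hypothetical event label, e.g. "address_confirmed"

def eventually_within(log, label, deadline):
    """STL-style 'eventually within [0, deadline]' check: does an
    event with `label` occur no later than `deadline` seconds in?"""
    return any(e.label == label and e.t <= deadline for e in log)

def debrief(log, requirements):
    """Evaluate each (label -> deadline) requirement against the log
    and aggregate pass/fail results into a simple QA report."""
    return {label: eventually_within(log, label, deadline)
            for label, deadline in requirements.items()}

# Hypothetical call log and procedural requirements
log = [Event(4.0, "address_confirmed"), Event(70.0, "dispatch_sent")]
reqs = {"address_confirmed": 10.0, "dispatch_sent": 60.0}
report = debrief(log, reqs)
# address_confirmed met its 10 s deadline; dispatch_sent missed 60 s
```

In the paper's framing, the LLM handles the contextual step (mapping the raw transcript to responder types and incident classes), while logic checks like the one sketched here provide the rigorous, auditable compliance verdicts.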
Katz, S.; Jordan, P.; Silliman, S.
(Proceedings of the 3rd International Conference on Novel and Intelligent Digital Systems, NIDS 2023)
Across the healthcare professions, many students don’t get enough practice doing simulated clinical interactions during course labs to feel confident about passing certification exams and treating actual patients. To address this problem, we are developing MedDbriefer, a web-based tutoring system that runs on a tablet. MedDbriefer allows peers to engage in supplemental clinical scenarios on their own. With its current focus on paramedic training, one student “voice treats” a simulated patient as the leader of a mock emergency medical services team while a peer uses MedDbriefer’s checklists to log the team leader’s verbalized actions. The system then analyzes the event log and generates a debriefing, which highlights errors such as assessment actions and treatment interventions that the team leader missed or performed late. This paper focuses on how the system analyzes event logs to generate adaptive debriefings.
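The kind of event-log analysis described — flagging checklist actions that were missed or performed late — can be sketched roughly as follows. The checklist items, timing thresholds, and log format here are hypothetical placeholders, not MedDbriefer's actual scenario data or implementation.

```python
def analyze_log(log, checklist):
    """Flag checklist actions the team leader missed or performed late.
    `log` maps action name -> time performed (minutes into the scenario);
    `checklist` maps action name -> latest acceptable time."""
    missed = [action for action in checklist if action not in log]
    late = [action for action, t in log.items()
            if action in checklist and t > checklist[action]]
    return {"missed": missed, "late": late}

# Hypothetical scenario checklist and recorded event log
checklist = {"assess_airway": 1.0, "apply_oxygen": 3.0, "check_vitals": 5.0}
log = {"assess_airway": 0.5, "check_vitals": 6.5}
feedback = analyze_log(log, checklist)
# -> apply_oxygen was never logged; check_vitals came after its window
```

A debriefing engine would then render such findings as natural-language feedback, optionally adapting emphasis to the severity or clinical consequence of each omission.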
Dubrow, S.; Bannan, B.
(, Lecture notes in computer science)
This chapter provides an overview of an exploratory case study involving a multiteam system in the fire and rescue emergency context, incorporating human sensor analytics (e.g., proximity sensors) and other data sources to reveal important insights on within- and between-team learning and training. Following a design research approach, the case study consisted of two live simulation scenarios that informed the design and development of a wearable technology-based system targeted to capture team-based behavior in the live simulation and visualize it during the debriefing session immediately afterward, potentially informing within- and cross-team behavior from a multiteam systems perspective grounded in theory and practice.
Katz, S., Albacete, P., Gallagher, J., Jordan, P., Platt, T., Silliman, S., & Yang, T. Comparing alternative approaches to debriefing in a tool to support peer-led simulation-based training. Intelligent Tutoring Systems: 18th International Conference, ITS 2022. https://doi.org/10.1007/978-3-031-09680-8_8. Retrieved from https://par.nsf.gov/biblio/10443701.