Title: Implementing Distributed Feedback in a Tool that Supports Peer-to-Peer Simulation in Healthcare
MedDbriefer is a web-based intelligent tutoring system (ITS) designed to enable healthcare students to practice clinical scenarios anytime, anywhere. While one student “voice treats” a scenario’s patient(s) as the leader of a mock Emergency Medical Services (EMS) team, a peer records the team’s actions on a tablet using the system’s checklists. When the scenario ends, MedDbriefer analyzes the event log and generates a debriefing. MedDbriefer also provides a platform for research on simulation-based training. This paper describes how the system’s debriefing engine could be extended to deliver feedback during a scenario as well as afterwards. MedDbriefer could then be used to compare the effectiveness of different ways of timing feedback delivery in computer-based simulation systems.
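As a rough sketch of what such "distributed" feedback could look like computationally, the Python fragment below drives a single rule base in two delivery modes: prompting the moment a deadline passes unmet (during the scenario) or staying silent until the post-scenario debriefing. The class names, rule representation, and checklist items are hypothetical illustrations, not MedDbriefer's actual design.

```python
from dataclasses import dataclass

# Hypothetical rule: an action the team leader should perform within a
# deadline (seconds from scenario start). Assumed for illustration only.
@dataclass
class Rule:
    action: str
    deadline_s: float

class FeedbackEngine:
    """One rule base, two feedback-timing conditions:
    mode="during" -> prompt as soon as a deadline passes unmet
    mode="after"  -> stay silent; report everything in the debriefing
    """
    def __init__(self, rules: list[Rule], mode: str = "after"):
        self.rules = {r.action: r for r in rules}
        self.mode = mode
        self.performed: dict[str, float] = {}  # action -> time first logged
        self.prompted: set[str] = set()        # avoid repeating a prompt

    def log_action(self, action: str, t: float) -> None:
        """Record a checklist action logged by the peer observer."""
        self.performed.setdefault(action, t)

    def tick(self, t: float) -> None:
        """Poll periodically during the scenario; prompts only in 'during' mode."""
        if self.mode != "during":
            return
        for r in self.rules.values():
            if (r.action not in self.performed
                    and r.action not in self.prompted
                    and t > r.deadline_s):
                self.prompted.add(r.action)
                print(f"[{t:.0f}s] Prompt: '{r.action}' is overdue")

    def debrief(self) -> list[str]:
        """Post-scenario analysis: flag missed and late actions."""
        findings = []
        for r in self.rules.values():
            t = self.performed.get(r.action)
            if t is None:
                findings.append(f"MISSED: {r.action}")
            elif t > r.deadline_s:
                findings.append(f"LATE: {r.action} at {t:.0f}s "
                                f"(expected by {r.deadline_s:.0f}s)")
        return findings

# Example run in the in-scenario condition:
engine = FeedbackEngine([Rule("check airway", 60), Rule("apply oxygen", 120)],
                        mode="during")
engine.log_action("check airway", 30)
engine.tick(130)          # prints a prompt: 'apply oxygen' is overdue
engine.log_action("apply oxygen", 150)
print(engine.debrief())   # ['LATE: apply oxygen at 150s (expected by 120s)']
```

Keeping a single analysis engine and varying only the delivery mode is one way a randomized controlled trial could attribute outcome differences to feedback timing alone.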
Katz, S.; Albacete, P.; Jordan, P.; Silliman, S.; Yang, T.
(2023, Proceedings of the Twenty-third International Conference of the Learning Sciences—ICLS 2023)
Blikstein, P.; Van Aalst, J.; Kizito, R.; Brennan, K.
(Eds.)
MedDbriefer allows paramedic students to engage in simulated prehospital emergency care scenarios and receive an automated debriefing on their performance. It is a web-based tool that runs on a tablet. Although debriefing is purported to be one of simulation-based training’s most critical components, there is little empirical research to guide human and automated debriefing. We implemented two approaches to debriefing in MedDbriefer and are conducting a randomized controlled trial to compare their effectiveness.
Katz, S.; Albacete, P.; Gallagher, J.; Jordan, P.; Platt, T.; Silliman, S.; Yang, T.
(2022, Intelligent Tutoring Systems: 18th International Conference, ITS 2022)
This poster describes an early-stage project. It introduces MedDbriefer, a tablet-based tool that allows small groups of paramedic students to practice realistic prehospital emergency care scenarios. While two or more students collaborate as members of an emergency medical service (EMS) team, a peer uses the tablet’s checklists to record the team’s actions. The system then analyzes the event log to provide an automated debriefing on the team’s performance. Although debriefing is purported to be one of simulation-based training’s most critical components, there is little research to guide human and automated debriefing. We are implementing two approaches to automated debriefing and will compare their effectiveness in an upcoming randomized controlled trial.
This NSF-funded study aims to develop and evaluate a novel debriefing system that captures and visualizes multimodal data streams from a multi-user VR environment, assessing learners’ cognitive (clinical decision-making) and behavioral (situational awareness, communication) processes to provide data-informed feedback focused on improving team-based care of patients who suffer sudden medical emergencies. Through this new multimodal debriefing system, instructors will be able to provide personalized feedback to clinicians during post-simulation debriefing sessions.
Packard, Becky Wai-Ling; Montgomery, Beronda L.; Mondisa, Joi-Lynn
(International Journal of Mentoring and Coaching in Education)
Purpose: The purpose of this study was to examine the experiences of multiple campus teams as they engaged in the assessment of their science, technology, engineering and mathematics (STEM) mentoring ecosystems within a peer assessment dialogue exercise. Design/methodology/approach: This project utilized a qualitative multicase study method involving six campus teams, drawing upon completed inventory and visual mapping artefacts, session observations and debriefing interviews. The campuses included research universities, small colleges and minority-serving institutions (MSIs) across the United States of America. The authors analysed which features of the peer assessment dialogue exercise scaffolded participants' learning about ecosystem synergies and threats. Findings: The results illustrated the benefit of instructor modelling, intra-team process time and multiple rounds of peer assessment. Participants gained new insights into their own campuses and an increased sense of possibility by dialoguing with peer campuses. Research limitations/implications: This project involved teams from a small set of institutions, relying on observational and self-reported debriefing data. Future research could centre perspectives of institutional leaders. Practical implications: The authors recommend dedicating time to the institutional assessment of mentoring ecosystems. Investing in a campus-wide mentoring infrastructure could align with campus equity goals. Originality/value: In contrast to studies that have focussed solely on programmatic outcomes of mentoring, this study explored strategies to strengthen institutional mentoring ecosystems in higher education, with a focus on peer assessment, dialogue and learning exercises.
Katz, S.; Jordan, P.; Silliman, S.
(2023, Proceedings of the 3rd International Conference on Novel and Intelligent Digital Systems, NIDS 2023)
Across the healthcare professions, many students don’t get enough practice doing simulated clinical interactions during course labs to feel confident about passing certification exams and treating actual patients. To address this problem, we are developing MedDbriefer, a web-based tutoring system that runs on a tablet. MedDbriefer allows peers to engage in supplemental clinical scenarios on their own. With its current focus on paramedic training, one student “voice treats” a simulated patient as the leader of a mock emergency medical services team while a peer uses MedDbriefer’s checklists to log the team leader’s verbalized actions. The system then analyzes the event log and generates a debriefing, which highlights errors such as assessment actions and treatment interventions that the team leader missed or performed late. This paper focuses on how the system analyzes event logs to generate adaptive debriefings.
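To make the log analysis described in that abstract concrete, here is a minimal, hypothetical Python sketch that checks a logged event stream against a scenario checklist and groups flagged errors into the two categories the abstract mentions (assessment actions vs. treatment interventions). The log format, item names, and deadlines are all assumptions for illustration, not the system's actual representation.

```python
# Hypothetical event log: action -> time (seconds) at which it was logged.
event_log = {"assess breathing": 25.0, "administer epinephrine": 400.0}

# Hypothetical scenario checklist: (expected action, category, deadline_s).
checklist = [
    ("assess breathing", "assessment", 60.0),
    ("assess circulation", "assessment", 90.0),
    ("administer epinephrine", "intervention", 300.0),
]

# Group flagged errors by category for the debriefing.
debriefing: dict[str, list[str]] = {"assessment": [], "intervention": []}
for action, category, deadline in checklist:
    t = event_log.get(action)
    if t is None:
        debriefing[category].append(f"missed: {action}")
    elif t > deadline:
        debriefing[category].append(f"late: {action} ({t:.0f}s > {deadline:.0f}s)")

for category, items in debriefing.items():
    print(f"{category}: {items or ['no errors flagged']}")
# -> assessment: ['missed: assess circulation']
# -> intervention: ['late: administer epinephrine (400s > 300s)']
```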
Katz, Sandra, Albacete, Patricia, Jordan, Pamela, Silliman, Scott, and Wrzesniewski, Matthew. Implementing Distributed Feedback in a Tool that Supports Peer-to-Peer Simulation in Healthcare. Retrieved from https://par.nsf.gov/biblio/10545427.
Katz, Sandra, Albacete, Patricia, Jordan, Pamela, Silliman, Scott, and Wrzesniewski, Matthew.
"Implementing Distributed Feedback in a Tool that Supports Peer-to-Peer Simulation in Healthcare". Country unknown/Code not available: Springer, Cham. https://par.nsf.gov/biblio/10545427.
@article{osti_10545427,
place = {Cham},
title = {Implementing Distributed Feedback in a Tool that Supports Peer-to-Peer Simulation in Healthcare},
url = {https://par.nsf.gov/biblio/10545427},
abstractNote = {MedDbriefer is a web-based intelligent tutoring system (ITS) designed to enable healthcare students to practice clinical scenarios anytime, anywhere. While one student “voice treats” a scenario’s patient(s) as the leader of a mock Emergency Medical Services (EMS) team, a peer records the team’s actions on a tablet using the system’s checklists. When the scenario ends, MedDbriefer analyzes the event log and generates a debriefing. MedDbriefer also provides a platform for research on simulation-based training. This paper describes how the system’s debriefing engine could be extended to deliver feedback during a scenario as well as afterwards. MedDbriefer could then be used to compare the effectiveness of different ways of timing feedback delivery in computer-based simulation systems.},
volume = {14798},
publisher = {Springer, Cham},
author = {Katz, Sandra and Albacete, Patricia and Jordan, Pamela and Silliman, Scott and Wrzesniewski, Matthew},
editor = {Sifaleras, A and Lin, F}
}