Search Results
Search for: All records
Total Resources: 2
Filter by Author / Creator
- Popov, Vitaliy (2)
- Cole, Michael (1)
- Cooke, James M (1)
- Danciu, Theodora (1)
- Ducharme, Casey (1)
- Falahee, Eleanor Anne (1)
- Gabelica, Catherine (1)
- Harmer, Bryan (1)
- Heasman, Benjamin (1)
- Huang, Kunpeng (1)
- Li, Kaylee Yaxuan (1)
- Sample, Alanson P (1)
- Tomaka, Sarah (1)
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
Abstract: Considering the criticality of post-simulation debriefings for skill development, more evidence is needed to establish how specific feedback design features might influence teams’ cognitive and metacognitive processing. The current research therefore investigates the effects of multisource feedback (MSF) and guided facilitation with video review on both cognitive processing and reflective (metacognitive) behaviors during post-simulation debriefings. With a sample of 174 second-year dental students randomly assigned to 20 teams, the authors conducted high-fidelity simulations of patient emergencies, followed by post-simulation debriefings, using a 2 × 2 factorial design to test the effects of MSF (present vs. absent) and guided facilitation with video review (present vs. absent). According to an ordered network analysis, designed to examine feedback processing levels (individual vs. team) and depth (high vs. low), as well as the presence of metacognitive reflective behaviors (evaluative behaviors, exploration of alternatives, decision-oriented behaviors), teams that received both MSF and guided facilitation demonstrated significantly deeper, team-level processing and more frequent evaluative behaviors. Teams that received only guided facilitation exhibited the highest rates of low-level, individual processing. However, facilitation also produced an additive effect that fostered reflection and a shift from individual- to team-oriented processing. In contrast, MSF alone produced the lowest levels of evaluative behaviors; without facilitation, it did not support team reflection. These results establish that combining MSF with guided facilitation and video review creates synergistic effects for team reflection. Even if MSF can highlight perceived performance discrepancies, teams need facilitation to interpret and learn collaboratively from the feedback.
Free, publicly accessible full text available September 13, 2026.
Li, Kaylee Yaxuan; Huang, Kunpeng; Harmer, Bryan; Ducharme, Casey; Heasman, Benjamin; Falahee, Eleanor Anne; Cooke, James M; Cole, Michael; Sample, Alanson P; Popov, Vitaliy (ACM Transactions on Computing for Healthcare)
Abstract: This study introduces AutoCLC, an AI-powered system designed to assess and provide feedback on closed-loop communication (CLC) in professional learning environments. CLC, in which a sender’s Call-Out statement is acknowledged by the receiver’s Check-Back statement, is a critical safety protocol in high-reliability domains, including emergency medicine resuscitation teams. Existing methods for evaluating CLC lack quantifiable metrics and depend heavily on human observation. AutoCLC addresses these limitations by leveraging natural language processing and large language models to analyze audio recordings from Advanced Cardiovascular Life Support (ACLS) simulation training. The system identifies CLC instances, measures their frequency and rate per minute, and categorizes communications as effective, incomplete, or missed. Technical evaluations demonstrate that AutoCLC achieves 78.9% precision for identifying Call-Outs and 74.3% for Check-Backs, with a performance gap of only 5% compared to human annotations. A user study involving 11 cardiac arrest instructors across three training sites supported the need for automated CLC assessment. Instructors found AutoCLC reports valuable for quantifying CLC frequency and quality, as well as for providing actionable, example-based feedback. Participants rated AutoCLC highly, with a System Usability Scale score of 76.4%, reflecting above-average usability. This work represents a significant step toward developing scalable, data-driven feedback systems that enhance individual skills and team performance in high-reliability settings.
Free, publicly accessible full text available September 23, 2026.
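To make the CLC metrics described in this record concrete, below is a minimal, hypothetical sketch of how Call-Out/Check-Back pairing, categorization, and a per-minute rate could be computed. The Utterance record, the call_out/check_back labels, the 10-second pairing window, and the word-count completeness check are all illustrative assumptions; the published AutoCLC system derives these judgments from NLP and large-language-model components that are not reproduced here.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical transcript record. The published AutoCLC pipeline derives the
# speaker, timing, and label fields from ASR transcripts using NLP/LLM models,
# which are not reproduced in this sketch.
@dataclass
class Utterance:
    speaker: str
    text: str
    t_sec: float   # utterance start time, in seconds
    label: str     # assumed labels: "call_out", "check_back", or "other"

def pair_clc(utterances: List[Utterance],
             window_sec: float = 10.0) -> List[Tuple[Utterance, Optional[Utterance], str]]:
    """Pair each Call-Out with the next Check-Back from a different speaker
    within window_sec, then categorize the exchange as effective, incomplete,
    or missed (window and thresholds are illustrative assumptions)."""
    results = []
    for i, u in enumerate(utterances):
        if u.label != "call_out":
            continue
        reply = next(
            (v for v in utterances[i + 1:]
             if v.label == "check_back"
             and v.speaker != u.speaker
             and v.t_sec - u.t_sec <= window_sec),
            None,
        )
        if reply is None:
            results.append((u, None, "missed"))
        elif len(reply.text.split()) < 3:   # crude completeness proxy (assumption)
            results.append((u, reply, "incomplete"))
        else:
            results.append((u, reply, "effective"))
    return results

def clc_rate_per_minute(n_clc: int, duration_sec: float) -> float:
    """Normalize the count of identified CLC exchanges to a per-minute rate."""
    return n_clc / (duration_sec / 60.0)

# Example: two Call-Outs (one acknowledged, one missed) in a 90-second clip.
demo = [
    Utterance("leader", "Give 1 mg epinephrine IV", 12.0, "call_out"),
    Utterance("nurse", "1 mg epinephrine IV, giving now", 15.5, "check_back"),
    Utterance("leader", "Resume compressions", 60.0, "call_out"),
]
exchanges = pair_clc(demo)
print([cat for _, _, cat in exchanges])                      # ['effective', 'missed']
print(round(clc_rate_per_minute(len(exchanges), 90.0), 2))   # 1.33
```

In practice, categorization quality would hinge on the upstream speech recognition and labeling models rather than on simple heuristics like the pairing window and word-count check used above.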