

This content will become publicly available on April 19, 2024

Title: Collaborative Online Learning with VR Video: Roles of Collaborative Tools and Shared Video Control
Award ID(s): 2106090
NSF-PAR ID: 10442800
Author(s) / Creator(s):
Date Published:
Journal Name: ACM CHI 2023
Page Range / eLocation ID: 1 to 18
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Collaborating scientists and storytellers successfully built a university-based science-in-action video storytelling model to test the research question: Can university scientists increase their relatability and public engagement through science-in-action video storytelling? Developed over 14 years, this science storytelling model produced more than a dozen high-visibility narratives that translated science to the public and featured scientists, primarily environmental and climate scientists, who are described in audience surveys as relatable people. This collaborative model, based on long-term trusting partnerships between scientists and video storytellers, documented scientists as they conducted their research, and together the partners created narratives intended to humanize scientists as authentic people on journeys of discovery. Unlike traditional documentary filmmaking or journalism, this translational science model was participatory: scientists shared in the making of the narratives to ensure the accuracy of each story's science content. Twelve science and research video story products have reached broad audiences through a variety of venues, including television and online streaming platforms such as the Public Broadcasting Service (PBS), Netflix, PIVOT TV, iTunes, and Kanopy. With a reach of over 180 million potential public viewers, we have demonstrated the effectiveness of this model in producing science and environmental narratives that appeal to the public. Results from post-screening surveys with public, high school, and undergraduate audiences showed that scientists were perceived as relatable. Our data include feedback from undergraduate and high school students who participated in the video storytelling processes and reported increased relatability to both scientists and science. In 2022, we surveyed undergraduate students using a method that differentiated among scientists' potentially relatable qualities; scientists' passion for their work and motivation to help others were consistently associated with relatability. The value of this model to scientists is discussed throughout this paper, as two of our authors are biological scientists who were featured in our original science-in-action videos. Additionally, this model provides a time-saving method for scientists to communicate their research. We propose that translational science stories created with this model may give audiences opportunities to vicariously experience scientists' day-to-day choices and challenges and thus may foster audiences' ability to relate to, and trust in, science.
  2. Captions play a major role in making educational videos accessible to all and are known to benefit a wide range of learners. However, many educational videos either lack captions or have inaccurate ones. Prior work has shown the benefits of using crowdsourcing to obtain accurate captions cost-efficiently, yet little is known about how learners edit captions of educational videos, either individually or collaboratively. In this work, we conducted a user study in which 58 learners (in a course of 387 learners) participated in editing the captions of 89 lecture videos that were generated by Automatic Speech Recognition (ASR) technologies. For each video, different learners conducted two rounds of editing. Based on the editing logs, we created a taxonomy of errors in educational video captions (e.g., Discipline-Specific, General, Equations). From the interviews, we identified individual and collaborative error-editing strategies. We then demonstrated the feasibility of applying machine learning models to assist learners in editing. Our work provides practical implications for advancing video-based learning and for educational video caption editing.
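The entry above reports the feasibility of machine-learning assistance for caption editing but does not describe the authors' models. As a purely illustrative sketch (not the paper's method), the Python snippet below flags ASR caption segments with low recognition confidence, equation-like symbols, or discipline-specific terms as candidates for learner review, loosely echoing the taxonomy categories named in the abstract. The segment structure, lexicon, regular expression, and 0.85 confidence threshold are all hypothetical assumptions introduced here for demonstration.

```python
# Illustrative sketch only: flag ASR caption segments that are likely to need
# learner editing. This is NOT the paper's model; the heuristics, names, and
# thresholds below are hypothetical assumptions for demonstration.
import re
from dataclasses import dataclass
from typing import List

@dataclass
class CaptionSegment:
    start: float          # segment start time in seconds
    end: float            # segment end time in seconds
    text: str             # ASR-generated caption text
    confidence: float     # mean word-level ASR confidence in [0, 1]

# Hypothetical cues loosely mirroring the error taxonomy mentioned in the
# abstract (Discipline-Specific, General, Equations).
EQUATION_PATTERN = re.compile(r"[=+^]|\b(?:squared|cubed|integral|sigma)\b", re.I)
DISCIPLINE_TERMS = {"eigenvalue", "entropy", "gradient", "covariance"}  # example lexicon

def flag_for_editing(segments: List[CaptionSegment],
                     min_confidence: float = 0.85) -> List[CaptionSegment]:
    """Return segments that a learner should probably review first."""
    flagged = []
    for seg in segments:
        words = {w.lower().strip(".,;:") for w in seg.text.split()}
        low_confidence = seg.confidence < min_confidence
        has_equation_cue = bool(EQUATION_PATTERN.search(seg.text))
        has_discipline_term = bool(words & DISCIPLINE_TERMS)
        if low_confidence or has_equation_cue or has_discipline_term:
            flagged.append(seg)
    return flagged

if __name__ == "__main__":
    demo = [
        CaptionSegment(0.0, 4.2, "welcome to the lecture on linear algebra", 0.97),
        CaptionSegment(4.2, 9.8, "the eigenvalue of the matrix is lambda squared", 0.71),
    ]
    for seg in flag_for_editing(demo):
        print(f"[{seg.start:.1f}-{seg.end:.1f}s] review: {seg.text}")
```

In practice, flagged segments could simply be presented to learners first, so that individual or collaborative editing effort is spent where the captions are most likely wrong.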