Search for: All records

Award ID contains: 1559889


  1. To examine the perceptions of faculty mentors of undergraduate research and of their supervisors, this work discusses the results of surveys administered after three years of a summer CS-focused REU Site program. One survey was completed by the administrators of the faculty research mentors (deans and chairs), and the other by the faculty mentors themselves. The surveys indicated a disconnect between the two groups in how they assessed undergraduate research mentoring as an indicator of faculty productivity, and in the overt versus covert recognition given to undergraduate mentoring. Additional topics included the effectiveness of internal communication of program outcomes and ways to improve it, as well as continued post-program mentoring engagement and its link to perceptions of long-term student benefits.
  2. In this paper, we describe a methodology for assessing audience engagement designed specifically for stage performances in a virtual space. We use a combination of galvanic skin response (GSR) data, self-reported emotional feedback using the Positive and Negative Affect Schedule (PANAS), and a think-aloud methodology to assess user reactions to the virtual reality experience. We describe a case study that uses the process to explore the role of immersive viewing of a performance by comparing users’ engagement while watching a virtual dance performance on a monitor versus using an immersive head-mounted display (HMD). Results from the study indicate significant differences between the viewing experiences. The process can serve as a tool in the development of VR storytelling experiences. (A minimal PANAS scoring sketch follows this list.)
  3. While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts and how they are personally affected by doing so are not. We study these questions via a narrative-sorting annotation task in which collections of tweets, carefully selected by sequentiality, topic, emotional content, and length, serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response, although readers did react to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light on the perceptive capabilities of human readers and the emotional impact of annotation work on them.
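
The virtual-performance study above collects self-reported emotional feedback with the PANAS. As context, the sketch below shows how PANAS responses are conventionally scored: twenty mood items are rated on a 1-5 scale, and the positive-affect (PA) and negative-affect (NA) scores are the sums of their respective ten items (each ranging from 10 to 50). The function name panas_scores and the example ratings are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of conventional PANAS scoring (illustrative only).
    # 20 mood items are rated 1-5; PA and NA scores sum their 10 items each.

    PA_ITEMS = ["interested", "excited", "strong", "enthusiastic", "proud",
                "alert", "inspired", "determined", "attentive", "active"]
    NA_ITEMS = ["distressed", "upset", "guilty", "scared", "hostile",
                "irritable", "ashamed", "nervous", "jittery", "afraid"]

    def panas_scores(responses):
        """Return (positive_affect, negative_affect) from a dict of 1-5 ratings."""
        pa = sum(responses[item] for item in PA_ITEMS)
        na = sum(responses[item] for item in NA_ITEMS)
        return pa, na

    # Hypothetical before/after ratings for one viewer (not study data).
    before = {item: 3 for item in PA_ITEMS + NA_ITEMS}
    after = dict(before, excited=5, inspired=4, jittery=2)
    print(panas_scores(before), panas_scores(after))   # (30, 30) (33, 29)

Per-condition comparisons of PA/NA scores (e.g., monitor versus HMD viewing) are the kind of analysis such totals feed into, alongside the GSR and think-aloud data.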