Search for: All records

Creators/Authors contains: "Porter, John"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.

  1. Redirected walking techniques use rotational gains to guide users away from physical obstacles as they walk in a virtual world, effectively creating the illusion of a larger virtual space than is physically present. Designers often want to keep users unaware of this manipulation, which is made possible by limitations in human perception that render rotational gains imperceptible below a certain threshold. Many aspects of these thresholds have been studied; however, no research has yet considered whether these thresholds may change over time as users gain more experience with them. To study this, we recruited 20 novice VR users (no more than 1 hour of prior experience with an HMD) and provided them with an Oculus Quest to use for four weeks on their own time. They were tasked to complete an activity assessing their sensitivity to rotational gain once each week, in addition to whatever other activities they wanted to perform. No feedback was provided to participants about their performance during each activity, minimizing the possibility of learning effects accounting for any observed changes over time. We observed that participants became significantly more sensitive to rotational gains over time, underscoring the importance of considering prior user experience in applications involving rotational gain, as well as how prior user experience may affect other, broader applications of VR.
    Free, publicly-accessible full text available July 1, 2023
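     The rotational gain manipulation described in this abstract can be sketched in a few lines. This is a minimal illustration only: the function names and the threshold value are hypothetical and are not taken from the paper (published detection thresholds vary across studies and, per this work, across user experience).

     ```python
     # Hypothetical sketch of applying a rotational gain in redirected walking.
     # Each frame, the user's real head-yaw change is scaled by a gain before
     # being applied to the virtual camera; gains close enough to 1.0 tend to
     # go unnoticed. The threshold below is illustrative, not from the study.
     DETECTION_THRESHOLD = 1.24  # illustrative upper gain bound

     def apply_rotational_gain(real_delta_yaw_deg, gain):
         """Scale one frame's real yaw change (degrees) by the rotational gain."""
         return real_delta_yaw_deg * gain

     def is_likely_imperceptible(gain, threshold=DETECTION_THRESHOLD):
         # Gains between 1/threshold and threshold stay within the (assumed)
         # detection limits in both the amplifying and attenuating directions.
         return 1.0 / threshold <= gain <= threshold
     ```

     A gain above 1 makes the virtual world rotate faster than the user's head, subtly steering the walking path; the study's finding implies that a gain judged imperceptible for novices may become noticeable as users accumulate VR experience.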
  2. This work explored how users’ sensitivity to offsets in their avatars’ virtual hands changes as they gain exposure to virtual reality. We conducted an experiment using a two-alternative forced choice (2-AFC) design over the course of four weeks, split into four sessions. The trials in each session had a variety of eight offset distances paired with eight offset directions (across a 2D plane). While we did not find evidence that users became more sensitive to the offsets over time, we did find evidence of behavioral changes. Specifically, participants’ head-hand coordination and completion time varied significantly as the sessions went on. We discuss the implications of both results and how they could influence our understanding of long-term calibration for perception-action coordination in virtual environments.
    Free, publicly-accessible full text available July 1, 2023
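     The two-alternative forced choice (2-AFC) design used in this study can be sketched as a simple trial loop. The names and parameters here are illustrative assumptions, not the authors' code: on each trial the offset is presented in one of two randomly chosen alternatives, the participant reports which one felt offset, and sensitivity is estimated from the proportion of correct responses per offset distance.

     ```python
     import random

     # Hypothetical 2-AFC trial loop. `respond` stands in for the participant:
     # it receives the offset magnitude and the interval actually containing
     # the offset, and returns the interval the participant chooses (0 or 1).
     def run_2afc_trial(offset_mm, respond):
         target = random.choice([0, 1])       # alternative containing the offset
         choice = respond(offset_mm, target)  # participant's forced choice
         return choice == target

     def proportion_correct(offset_mm, respond, n_trials=100):
         """Estimate sensitivity at one offset distance as the hit rate."""
         hits = sum(run_2afc_trial(offset_mm, respond) for _ in range(n_trials))
         return hits / n_trials
     ```

     Chance performance in a 2-AFC task is 50%, so a detection threshold is conventionally read off where the psychometric function crosses a criterion such as 75% correct.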
  3. Free, publicly-accessible full text available November 1, 2022
  4. The research data repository of the Environmental Data Initiative (EDI) is building on over 30 years of data curation research and experience in the National Science Foundation-funded US Long-Term Ecological Research (LTER) Network. It provides mature functionalities and well-established workflows, and now publishes all 'long-tail' environmental data. High-quality scientific metadata are enforced through automatic checks against community-developed rules and the Ecological Metadata Language (EML) standard. Although the EDI repository is far along in making its data findable, accessible, interoperable, and reusable (FAIR), representatives from EDI and the LTER are developing best practices for the edge cases in environmental data publishing. One of these is the vast amount of imagery taken in the context of ecological research, ranging from wildlife camera traps to plankton imaging systems to aerial photography. Many images are used in biodiversity research for community analyses (e.g., individual counts, species cover, biovolume, productivity), while others are taken to study animal behavior and landscape-level change. Some examples from the LTER Network include using photos of a heron colony to measure provisioning rates for chicks (Clarkson and Erwin 2018) and identifying changes in plant cover and functional type through time (Peters et al. 2020). Multi-spectral images are employed to identify prairie species. Underwater photo quads are used to monitor changes in benthic biodiversity (Edmunds 2015). Sosik et al. (2020) used a continuous Imaging FlowCytobot to identify and measure phyto- and microzooplankton. Cameras at McMurdo Dry Valleys assess snow and ice cover on Antarctic lakes, allowing estimation of primary production (Myers 2019). It has been standard practice to publish numerical data extracted from images in EDI; however, the supporting imagery generally has not been made publicly available.
Our goal in developing best practices for documenting and archiving these images is for them to be discovered and re-used. Our examples demonstrate several issues. The research questions, and hence the image subjects, are variable. Images frequently come in logical sets of time series. The size of such sets can be large, and only some images may be contributed to a dedicated specialized repository. Finally, these images are taken in a larger monitoring context where many other environmental data are collected at the same time and location. Currently, a typical approach to publishing image data in EDI is a package containing compressed (ZIP or tar) files with the images, a directory manifest with additional image-specific metadata, and a package-level EML metadata file. Images in the compressed archive may be organized within directories, with filenames corresponding to treatments, locations, time periods, individuals, or other grouping attributes. Additionally, the directory manifest table has columns for each attribute. Package-level metadata include standard coverage elements (e.g., date, time, location) and sampling methods. This approach of archiving logical 'sets' of images reduces the effort of providing metadata for each image when most information would be repeated, but at the expense of not making every image individually searchable. The latter may be overcome if the provided manifest contains standard metadata that would allow searching and automatic integration with other images.
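The package structure described above (a compressed archive of images plus a directory-manifest table of per-image attributes) can be sketched as follows. This is a minimal illustration under assumed conventions: the function name, manifest filename, and attribute columns are hypothetical, not part of an EDI specification.

```python
import csv
import io
import zipfile

# Hypothetical sketch: bundle an image set into a ZIP alongside a
# manifest.csv whose columns carry per-image grouping attributes
# (here: site, date, treatment). Column names are illustrative.
def build_image_package(images, zip_path):
    """images: list of dicts with keys 'filename', 'site', 'date',
    'treatment', and raw image 'bytes'."""
    manifest = io.StringIO()
    writer = csv.DictWriter(
        manifest, fieldnames=["filename", "site", "date", "treatment"])
    writer.writeheader()
    with zipfile.ZipFile(zip_path, "w") as zf:
        for img in images:
            writer.writerow(
                {k: img[k] for k in ("filename", "site", "date", "treatment")})
            zf.writestr(img["filename"], img["bytes"])
        # The manifest travels inside the archive with the images it describes.
        zf.writestr("manifest.csv", manifest.getvalue())
```

Package-level EML metadata (coverage, methods) would accompany the archive separately; the manifest is what makes individual images searchable once standard metadata columns are agreed upon.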
  5. Virtual reality games have grown rapidly in popularity since the first consumer VR head-mounted displays were released in 2016; however, comparatively little research has explored how this new medium impacts the experience of players. In this paper, we present a study exploring how user experience changes when playing Minecraft on the desktop and in immersive virtual reality. Fourteen players completed six 45-minute sessions: three played on the desktop and three in VR. The Gaming Experience Questionnaire, the igroup Presence Questionnaire, and the Simulator Sickness Questionnaire were administered after each session, and players were interviewed at the end of the experiment. Participants strongly preferred playing Minecraft in VR, despite frustrations with using teleportation as a travel technique and feelings of simulator sickness. Players enjoyed using motion controls, but still continued to use indirect input under certain circumstances. This did not appear to negatively impact feelings of presence. We conclude with four lessons for game developers interested in porting their games to virtual reality.