Search for: All records

Creators/Authors contains: "Toups, Z O."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In multiplayer collaborative games, players need to coordinate their actions and synchronize their efforts effectively to succeed as a team; thus, individual differences can impact teamwork and gameplay. This article investigates the effects of cognitive styles on teams engaged in collaborative gaming activities. Fifty-four individuals took part in a mixed-methods user study; they were classified as field-dependent (FD) or field-independent (FI) based on a field-dependent–independent (FD-I) cognitive-style-elicitation instrument. Three groups of teams were formed based on the cognitive style of each team member: FD-FD, FD-FI, and FI-FI. We examined collaborative gameplay in terms of team performance, cognitive load, communication, and player experience. The analysis revealed that FD-I cognitive style affected the performance and mental load of teams. We expect the findings to provide useful insights into how cognitive styles influence collaborative gameplay.
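The grouping step described in this abstract can be illustrated with a minimal Python sketch: participants are labeled FD or FI from an instrument score, and two-person teams are collapsed into the three groups (FD-FD, FD-FI, FI-FI). The cutoff value and participant scores below are assumptions made for illustration, not values from the study.

```python
from itertools import combinations

FD_I_CUTOFF = 10  # assumed cutoff score on the elicitation instrument (illustrative only)


def classify(score: int) -> str:
    """Label a participant field-independent (FI) above the cutoff, else field-dependent (FD)."""
    return "FI" if score > FD_I_CUTOFF else "FD"


def team_type(member_styles: list[str]) -> str:
    """Collapse a two-person team into one of the three study groups: FD-FD, FD-FI, FI-FI."""
    return "-".join(sorted(member_styles))


# Illustrative participants: label each possible two-person pairing.
scores = {"p1": 4, "p2": 15, "p3": 11, "p4": 7}
styles = {pid: classify(s) for pid, s in scores.items()}
for a, b in combinations(styles, 2):
    print(a, b, "->", team_type([styles[a], styles[b]]))
```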
  3. Game map interfaces provide an alternative perspective on the worlds players inhabit. Compared to navigation applications popular in day-to-day life, game maps have different affordances to match players' situated goals. To contextualize and understand these differences and how they developed, we present a historical chronicle of game map interfaces. Starting from how games came to involve maps, we trace how maps were at first separate from the game and became progressively more integrated into play, until converging in smartphone-style interfaces. We synthesize several game history texts with critical engagement with 123 key games to develop this map-focused chronicle, from which we highlight trends and opportunities for future map designs. Our work contributes a record of trends in game map interfaces that can serve as a source of reference and inspiration to game designers, digital physical-world map designers, and game scholars.
  4. Autonomous robotic vehicles (i.e., drones) are potentially transformative for search and rescue (SAR). This paper works toward wearable interfaces through which humans team with multiple drones. We introduce the Virtual Drone Search Game as a first step in creating a mixed reality simulation for humans to practice drone teaming and SAR techniques. Our goals are to (1) evaluate input modalities for the drones, derived from an iterative narrowing of the design space, (2) improve our mixed reality system for designing input modalities and training operators, and (3) collect data on how participants socially experience the virtual drones with which they work. In our study, 17 participants played the game with two input modalities (Gesture condition, Tap condition) in counterbalanced order. Results indicated that participants performed best with the Gesture condition. Participants found the multiple controls challenging, and future studies might include more training on the devices and game. Participants felt like a team with the drones and found them moderately agentic. In our future work, we will extend this testing to a more externally valid mixed reality game.
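The two input conditions in this study suggest a common design pattern: keep one shared drone command set and vary only how raw input events map onto it, so the game logic stays identical across conditions. The Python sketch below illustrates that idea; the command names and gesture/tap vocabulary are hypothetical, not the game's actual controls.

```python
from enum import Enum, auto
from typing import Optional


class DroneCommand(Enum):
    MOVE_FORWARD = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    MARK_TARGET = auto()


# Each condition differs only in how raw input events map onto the shared commands.
GESTURE_MAP = {
    "swipe_up": DroneCommand.MOVE_FORWARD,
    "swipe_left": DroneCommand.TURN_LEFT,
    "swipe_right": DroneCommand.TURN_RIGHT,
    "pinch": DroneCommand.MARK_TARGET,
}
TAP_MAP = {
    "tap_top": DroneCommand.MOVE_FORWARD,
    "tap_left": DroneCommand.TURN_LEFT,
    "tap_right": DroneCommand.TURN_RIGHT,
    "double_tap": DroneCommand.MARK_TARGET,
}


def dispatch(event: str, condition: str) -> Optional[DroneCommand]:
    """Translate a raw input event into a drone command for the active condition."""
    mapping = GESTURE_MAP if condition == "Gesture" else TAP_MAP
    return mapping.get(event)


print(dispatch("pinch", "Gesture"))   # DroneCommand.MARK_TARGET
print(dispatch("double_tap", "Tap"))  # DroneCommand.MARK_TARGET
```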
  5. Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices to be used in games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft's Kinect, and Intel's RealSense. The comparison provides game designers with an understanding of the main factors to consider when selecting these devices and how to design games that use them. We developed a simple hand-gesture-based game to evaluate performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to using RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings were supported by players' accounts of their experiences using these gesture devices. Based on these findings, we discuss how such devices can be used by game designers and offer a set of design cautions that give insights into the design of gesture-based games.
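One practical implication of comparing devices this way is to keep the game decoupled from any single tracker. The Python sketch below shows a hypothetical adapter interface for swapping hand-tracking devices behind a common abstraction; the classes and fields are illustrative placeholders, not calls into the Leap Motion, Kinect, or RealSense SDKs.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class HandPose:
    x: float
    y: float
    z: float
    pinch: bool  # whether a pinch/grab gesture is currently detected


class HandTracker(ABC):
    @abstractmethod
    def poll(self) -> Optional[HandPose]:
        """Return the latest hand pose, or None if no hand is visible."""


class FakeTracker(HandTracker):
    """Stand-in for a device-specific adapter; a real one would wrap the vendor SDK."""

    def poll(self) -> Optional[HandPose]:
        return HandPose(x=0.1, y=0.4, z=0.0, pinch=True)


def game_update(tracker: HandTracker) -> str:
    """Game logic depends only on the HandTracker interface, not on any one device."""
    pose = tracker.poll()
    if pose is None:
        return "no hand detected"
    return "grab" if pose.pinch else f"hover at ({pose.x:.2f}, {pose.y:.2f})"


print(game_update(FakeTracker()))
```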
  6. Composite wearable computers combine multiple wearable devices to form a cohesive whole. Designing these complex systems and integrating devices to effectively leverage their affordances is nontrivial. To inform the design of composite wearable computers, we undertook a grounded theory analysis of 84 wearable input devices drawing from 197 data sources, including technical specifications, research papers, and instructional videos. The resulting prescriptive design framework consists of four axes: type of interactivity, associated output modalities, mobility, and body location. This framework informs a composition-based approach to the design of wearable computers, enabling designers to identify which devices fill particular user needs and design constraints. Using this framework, designers can understand the relationship between the wearable, the user, and the environment, identify limitations in available wearable devices, and gain insights into how to address design challenges developers will likely encounter. 
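The framework's four axes lend themselves to a simple catalog-and-filter representation. The Python sketch below encodes each device as a record over the four axes and filters a catalog against a designer's constraints; the example devices and axis values are invented for illustration and are not entries from the authors' dataset.

```python
from dataclasses import dataclass
from typing import List, Set


@dataclass
class WearableInput:
    name: str
    interactivity: str           # e.g., "touch", "gesture", "voice"
    output_modalities: Set[str]  # e.g., {"haptic", "visual"}
    mobility: str                # e.g., "stationary", "mobile"
    body_location: str           # e.g., "wrist", "head", "forearm"


CATALOG = [
    WearableInput("example smartwatch", "touch", {"haptic", "visual"}, "mobile", "wrist"),
    WearableInput("example armband", "gesture", {"haptic"}, "mobile", "forearm"),
    WearableInput("example headset", "voice", {"audio"}, "mobile", "head"),
]


def candidates(interactivity: str, body_location: str) -> List[WearableInput]:
    """Return catalog devices that satisfy constraints on two of the four axes."""
    return [d for d in CATALOG
            if d.interactivity == interactivity and d.body_location == body_location]


print([d.name for d in candidates("touch", "wrist")])  # ['example smartwatch']
```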
  7. Search and rescue (SAR) operations are often nearly computer-technology-free due to the fragility and connectivity needs of current information communication technology (ICT). In this design fiction, we envision a world where SAR uses augmented reality (AR) and the surplus labor of volunteers during crisis response efforts. Unmanned aerial vehicles, crowdsourced mapping platforms, and concepts from video game mapping technologies can all be mixed to keep SAR operations complexity-free while incorporating ICTs. Our scenario describes a near-future SAR operation with currently available technology being assembled and deployed without issue. After our scenario, we discuss socio-technical barriers for technology use like technical fragility and overwhelming complexity. We also discuss how to work around those barriers and how to use video games as a testbed for SAR technology. We hope to inspire more resilient ICT design that is accessible without training. 
  8. Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices for games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based, hand-gesture devices: Leap Motion, Microsoft's Kinect, and Intel's RealSense. We developed a simple hand-gesture-based game to evaluate performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better using Leap Motion and Kinect compared to using RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings suggest that not all gesture recognition devices are suitable for games and that designers need to make informed decisions when selecting gesture recognition devices and designing gesture-based games to ensure the usability, accuracy, and comfort of such games.
  9. Composite wearable computers consist of multiple wearable devices connected and working as a cohesive whole. These composite wearable computers are promising for augmenting our interaction with physical, virtual, and mixed play spaces (e.g., mixed reality games). Yet little research has directly addressed how mixed reality system designers can select wearable input devices and how these devices can be assembled into a cohesive wearable computer. We present an initial taxonomy of wearable input devices to aid designers in deciding which devices to select and assemble together to support different mixed reality systems. We undertook a grounded theory analysis of 84 different wearable input devices, resulting in a design taxonomy for composite wearable computers. The taxonomy consists of two axes: TYPE OF INTERACTIVITY and BODY LOCATION. These axes enable designers to identify which devices fill particular needs in the system development process and how these devices can be assembled together to form a cohesive wearable computer.