Title: MobiSweep: Exploring Spatial Design Ideation Using a Smartphone as a Hand-held Reference Plane
In this paper, we explore quick 3D shape composition during early-phase spatial design ideation. Our approach is to re-purpose a smartphone as a hand-held reference plane for creating, modifying, and manipulating 3D sweep surfaces. We implemented MobiSweep, a prototype application to explore a new design space of constrained spatial interactions that combine direct orientation control with indirect position control via well-established multi-touch gestures. MobiSweep leverages kinesthetically aware interactions for the creation of a sweep surface without explicit position tracking. The design concepts generated by users, in conjunction with their feedback, demonstrate the potential of such interactions in enabling spatial ideation.
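The interaction model summarized above (direct orientation control from the phone's sensors combined with indirect, touch-driven position control of a reference plane) can be illustrated with a minimal sketch. The Python snippet below is not MobiSweep's implementation; the orientation stream, touch-drag offsets, fixed per-step advance, and profile embedding are all assumptions made purely for illustration of how a sweep surface could be accumulated without explicit position tracking.

```python
import numpy as np

def sweep_surface(profile_2d, orientations, drag_offsets, step=0.05):
    """Sweep a planar profile along a path defined by device orientation
    and touch-drag offsets (hypothetical inputs, not MobiSweep's API).

    profile_2d   : (P, 2) array, cross-section drawn in the reference plane
    orientations : sequence of (3, 3) rotation matrices from the phone's IMU
    drag_offsets : sequence of (du, dv) in-plane offsets from touch gestures
    Returns an (N, P, 3) array of swept cross-sections.
    """
    sections = []
    position = np.zeros(3)
    for R, (du, dv) in zip(orientations, drag_offsets):
        u, v, n = R[:, 0], R[:, 1], R[:, 2]   # plane axes and plane normal
        # Indirect position control: touch drags translate the reference
        # plane in-plane, while each step also advances along its normal.
        position = position + du * u + dv * v + step * n
        # Direct orientation control: the profile is embedded in the current
        # reference plane defined by the phone's orientation.
        section = position + profile_2d[:, 0:1] * u + profile_2d[:, 1:2] * v
        sections.append(section)
    return np.stack(sections)

# Example: sweep a small square cross-section while the phone slowly tilts.
if __name__ == "__main__":
    square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float) * 0.1
    orientations = []
    for k in range(20):
        a = 0.05 * k                       # gradually increasing tilt angle
        orientations.append(np.array([[np.cos(a), 0.0, np.sin(a)],
                                      [0.0,       1.0, 0.0],
                                      [-np.sin(a), 0.0, np.cos(a)]]))
    drags = [(0.0, 0.01)] * 20             # small constant touch drag
    surface = sweep_surface(square, orientations, drags)
    print(surface.shape)                   # (20, 4, 3)
```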
Award ID(s): 1632154, 1329979, 1538868
PAR ID: 10041306
Journal Name: Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction
Page Range / eLocation ID: 12-20
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of employing 2D sketching. RealFusion comprises three creative states in which users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface comprising a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.
  2. Virtual reality (VR) offers potential as a prototyping tool for human-robot interaction. We explored a way to utilize human-centered design (HCD) methodology to develop a collaborative VR game for understanding teens’ perceptions of, and interactions with, social robots. Our paper features three stages of the design process for teen-robot interaction in VR: ideation, prototyping, and game development. In the ideation stage, we identified three important design principles: collaboration, customization, and robot characterization. In the prototyping stage, we developed a card game, conducted gameplay, and confirmed our design principles. Finally, we developed a low-fidelity VR game and received teens’ feedback. This exploratory study highlights the potential of VR, both for collaborative robot design and for teen-robot interaction studies.
  3. Ideation is a key phase in engineering design, and brainstorming is an established method for ideation. A limitation of the brainstorming process is that idea production tends to peak at the beginning and quickly decrease with time. In this exploratory study, we tested an innovative technique to sustain ideation by providing designers feedback about their neurocognition. We used a neuroimaging technique (fNIRS) to monitor students’ neurocognitive activations during a brainstorming task. Half of the students received real-time feedback about the neurocognitive activation in their prefrontal cortex, a brain region associated with working memory and cognitive flexibility. Students who received the neurocognitive feedback maintained higher cortical activation and longer sustained peak activation. Students receiving the neurocognitive feedback also demonstrated a higher percentage of right-hemispheric dominance, a hemisphere associated with creative processing, compared to students without neurocognitive feedback. The increase in right-hemispheric dominance positively correlated with an increase in the number of solutions during concept generation and with higher design idea fluency. These results demonstrate the prospective use of neurocognitive feedback to sustain the cognitive activations necessary for idea generation during brainstorming. Future research should explore the effect of neurocognitive feedback with a more robust sample of designers and compare neurocognitive feedback with other types of interventions to sustain ideation.
  4. Virtual reality (VR) offers potential as a collaborative tool for both technology design and human-robot interaction. We utilized a participatory, human-centered design (HCD) methodology to develop a collaborative, asymmetric VR game to explore teens’ perceptions of, and interactions with, social robots. Our paper illustrates three stages of our design process: ideation, prototyping, and usability testing with users. Through these stages we identified important design requirements for our mid-fidelity environment. We then describe findings from our pilot test of the mid-fidelity VR game with teens. Due to the unique asymmetric virtual reality design, we observed successful collaborations and interesting collaboration styles across teens. This study highlights the potential of asymmetric VR as a collaborative design tool as well as an appropriate medium for successful teen-to-teen collaboration.
  5. Inspirational stimuli are known to be effective in supporting ideation during early-stage design. However, prior work has predominantly constrained designers to text-only queries when searching for stimuli, which is not consistent with real-world design behavior, where fluidity across modalities (e.g., visual, semantic) is standard practice. In the current work, we introduce a multi-modal search platform that retrieves inspirational stimuli in the form of 3D-model parts using text, appearance, and function-based search inputs. Computational methods leveraging a deep-learning approach are presented for designing and supporting this platform, which relies on deep neural networks trained on a large dataset of 3D-model parts. This work further presents the results of a cognitive study (n = 21) in which the aforementioned search platform was used to find parts to inspire solutions to a design challenge. Participants engaged with three different search modalities: by keywords, by 3D parts, and by user-assembled 3D parts in their workspace. When searching by parts that are selected or in their workspace, participants had additional control over the similarity of the appearance and function of results relative to the input. The results of this study demonstrate that the modality used impacts search behavior, such as search frequency, how retrieved search results are engaged with, and how broadly the search space is covered. Specific results link interactions with the interface to search strategies participants may have used during the task. Findings suggest that when searching for inspirational stimuli, desired results can be achieved both by direct search inputs (e.g., by keyword) and by more randomly discovered examples, where a specific goal was not defined. Both search processes are important to enable when designing search platforms for inspirational stimuli retrieval. (A minimal sketch of this kind of weighted multi-modal retrieval follows this listing.)
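The sketch below illustrates the general idea of multi-modal retrieval described in item 5: rank candidate 3D-model parts by a weighted similarity to the query across text, appearance, and function embeddings. It is not the platform's actual implementation; the embedding vectors, weight parameters, and data structures are assumptions standing in for the deep-network features and user-controlled similarity settings mentioned in the abstract.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_parts(query, parts, w_text=1.0, w_appearance=0.0, w_function=0.0):
    """Rank 3D-model parts by weighted similarity across modalities.

    query : dict with optional 'text', 'appearance', 'function' embeddings
    parts : list of dicts, each with a 'name' and per-modality embeddings
    The weights mimic user-controlled emphasis on appearance/function
    similarity (an assumed interface, for illustration only).
    """
    weights = {"text": w_text, "appearance": w_appearance, "function": w_function}
    scored = []
    for part in parts:
        score = 0.0
        for modality, w in weights.items():
            if w > 0 and modality in query and modality in part:
                score += w * cosine(query[modality], part[modality])
        scored.append((score, part["name"]))
    # Highest combined similarity first.
    return [name for score, name in sorted(scored, reverse=True)]

# Example with random vectors standing in for deep-network embeddings.
rng = np.random.default_rng(0)
parts = [{"name": f"part_{i}",
          "text": rng.normal(size=64),
          "appearance": rng.normal(size=64),
          "function": rng.normal(size=64)} for i in range(5)]
query = {"text": rng.normal(size=64), "appearance": parts[2]["appearance"]}
print(rank_parts(query, parts, w_text=0.5, w_appearance=0.5))
```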