- Award ID(s):
- 2040489
- PAR ID:
- 10484709
- Publisher / Repository:
- ACM
- Date Published:
- ISBN:
- 9781450394727
- Page Range / eLocation ID:
- 1 to 2
- Format(s):
- Medium: X
- Location:
- Daegu, Republic of Korea
- Sponsoring Org:
- National Science Foundation
More Like this
-
The electronics-centered approach to physical computing presents challenges when designers build tangible interactive systems due to its inherent emphasis on circuitry and electronic components. To explore an alternative physical computing approach, we have developed a computer vision (CV) based system that uses a webcam, a computer, and printed fiducial markers to create functional tangible interfaces. Through a series of design studios, we probed how designers build tangible interfaces with this CV-driven approach. In this paper, we apply the annotated portfolio method to reflect on the fifteen outcomes from these studios. We observed that CV markers offer versatile materiality for tangible interactions, afford the use of democratic materials for interface construction, and engage designers in embodied debugging with their own vision as a proxy for CV. By sharing our insights, we inform other designers and educators who seek alternative ways to facilitate physical computing and tangible interaction design.
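The abstract does not name a specific CV library, but a webcam-plus-fiducial pipeline of this kind can be sketched with OpenCV's ArUco module; this is an assumption for illustration, not the authors' implementation, and the marker dictionary, camera index, and `on_marker` callback below are placeholders.

```python
# Minimal sketch of a webcam + fiducial-marker loop, assuming OpenCV's ArUco
# module (opencv-contrib-python, 4.7+ class-based API). The paper does not
# specify its marker system; dictionary choice and callback are hypothetical.
import cv2

def on_marker(marker_id, corners):
    # Placeholder for an interface behavior bound to a printed marker,
    # e.g. "marker 7 visible" -> toggle a virtual switch.
    print(f"marker {marker_id} near {corners[0].mean(axis=0)}")

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id, c in zip(ids.flatten(), corners):
            on_marker(int(marker_id), c)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```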
-
Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware that chemists use in real laboratories is tethered to a laptop with a digitized lesson. Because interactive educational content is increasingly being placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, centering mainly on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving to more novel, multi-modal MR devices/interfaces.
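As a purely illustrative sketch (not the authors' software), the two study conditions can be read as different front-ends to the same titration model: keyboard arrows and the physical stopcock both drive a single flow-rate parameter. The class and function names below are hypothetical, and the sensor mapping for the 3D-printed valve is assumed.

```python
# Hypothetical sketch of the two input conditions driving one titration model.
class TitrationModel:
    def __init__(self, flow_step=0.05):
        self.flow_rate = 0.0          # mL/s of titrant released
        self.volume_delivered = 0.0   # total mL delivered
        self.flow_step = flow_step

    def tick(self, dt):
        # Advance the simulated titration by dt seconds.
        self.volume_delivered += self.flow_rate * dt

def keyboard_condition(model, key):
    # Low-embodied condition: arrow keys nudge the virtual stopcock.
    if key == "up":
        model.flow_rate += model.flow_step
    elif key == "down":
        model.flow_rate = max(0.0, model.flow_rate - model.flow_step)

def stopcock_condition(model, valve_angle_deg):
    # High-embodied condition: the angle read from the 3D-printed burette's
    # valve (sensor details are not given in the abstract) maps directly to
    # flow rate, with 90 degrees open assumed to mean 1 mL/s.
    model.flow_rate = max(0.0, min(1.0, valve_angle_deg / 90.0))
```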
-
Computer Vision (CV) is used in a broad range of Cyber-Physical Systems such as surgical and factory-floor robots and autonomous vehicles, including small Unmanned Aerial Systems (sUAS). It enables machines to perceive the world by detecting and classifying objects of interest, reconstructing 3D scenes, estimating motion, and maneuvering around objects. CV algorithms are developed using diverse machine learning and deep learning frameworks and are often deployed on resource-limited edge devices. Because sUAS rely upon an accurate and timely perception of their environment to perform critical tasks, problems related to CV can create hazardous conditions leading to crashes or mission failure. In this paper, we perform a systematic literature review (SLR) of challenges associated with CV, hardware, and software engineering. We then group the reported challenges into five categories and fourteen sub-challenges and present existing solutions. As the current literature focuses primarily on CV and hardware challenges, we close by discussing implications for Software Engineering, drawing examples from a CV-enhanced multi-sUAS system.
-
We investigate typing on a QWERTY keyboard rendered in virtual reality. Our system tracks users’ hands in the virtual environment via a Leap Motion mounted on the front of a head-mounted display. This allows typing on an auto-correcting midair keyboard without the need for auxiliary input devices such as gloves or handheld controllers. It supports input via the index fingers of one or both hands. We compare two keyboard designs: a normal QWERTY layout and a split layout. We found users typed at around 16 words per minute using one or both index fingers on the normal layout, and about 15 words per minute using both index fingers on the split layout. Users had a corrected error rate below 2% in all cases. To explore midair typing with limited or no visual feedback, we also had users type on an invisible keyboard. Users typed on this keyboard at 11 words per minute with an error rate of 3.3%, despite the keyboard providing almost no visual feedback.
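The abstract does not describe the hand-tracking pipeline or the auto-correction decoder; as a minimal sketch under that caveat, selecting the key nearest a tracked fingertip on a virtual QWERTY layout could look like the following, where the key pitch, row stagger, and `fingertip_xy` input are assumptions.

```python
# Sketch: map a tracked fingertip position to the nearest key on a virtual
# QWERTY layout. Key pitch, row stagger, and coordinates are assumptions;
# the paper's actual system also applies auto-correction, omitted here.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_PITCH = 1.0                   # arbitrary layout units between key centers
ROW_OFFSETS = [0.0, 0.25, 0.75]   # per-row horizontal stagger

def key_centers():
    centers = {}
    for r, row in enumerate(ROWS):
        for c, ch in enumerate(row):
            centers[ch] = (ROW_OFFSETS[r] + c * KEY_PITCH, r * KEY_PITCH)
    return centers

def nearest_key(fingertip_xy, centers=key_centers()):
    x, y = fingertip_xy
    return min(centers, key=lambda ch: (centers[ch][0] - x) ** 2 +
                                       (centers[ch][1] - y) ** 2)

# Example: a tap registered near the middle of the top row.
print(nearest_key((4.1, 0.2)))  # -> 't' (a decoder would then auto-correct)
```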
-
Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World
We present Window-Shaping, a tangible mixed-reality (MR) interaction metaphor for design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using a sketch-and-inflate scheme, our metaphor enables quick design of dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects, without the need for 3D reconstruction or fiducial markers. Through a preliminary evaluation of our prototype application, we demonstrate the expressiveness provided by our design workflow, the effectiveness of our interaction scheme, and the potential of our metaphor.
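The paper's own inflation algorithm is not detailed in this abstract; one common way to approximate a "sketch-and-inflate" step is to lift a closed 2D sketch into a height field whose height grows with distance from the sketch boundary. The sketch below assumes the sketched region is available as a rasterized binary mask and uses SciPy's distance transform; it is an illustration, not the authors' method.

```python
# Illustrative "inflate a closed 2D sketch into a height field" sketch.
# Assumes the sketch region is available as a binary mask (True = inside);
# the actual Window-Shaping pipeline is not described at this level of detail.
import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate(mask: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Return a height field that bulges toward the middle of the region."""
    dist = distance_transform_edt(mask)   # distance of each inside pixel to the boundary
    # Square-root falloff gives a rounded, balloon-like profile.
    return scale * np.sqrt(dist)

# Example: inflate a filled circle drawn into a 128x128 mask.
yy, xx = np.mgrid[:128, :128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
height = inflate(mask)
print(height.max())  # peak height at the circle's center
```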