

Title: Directing Tangible Controllers with Computer Vision and Beholder
We present Beholder, a computer vision (CV) toolkit for building tangible controllers for interactive computer systems. Beholder enables designers to build physical inputs that are instrumented with CV markers. By observing the properties of these markers, a CV system can detect the physical interactions that occur. Beholder provides a software editor that lets designers map CV marker behavior to keyboard events, thus connecting the CV-driven tangible controllers to any software that responds to keyboard input. We propose three design scenarios for Beholder: controllers to support everyday work, alternative controllers for games, and transforming physical therapy equipment into controllers that monitor patient progress.
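To make the marker-to-keystroke idea concrete, here is a minimal sketch, assuming OpenCV's ArUco fiducials and the pynput keyboard library; the marker ID, bound key, and cover-the-marker interaction are illustrative assumptions, not Beholder's actual editor or runtime.

```python
# Minimal sketch: fire a keyboard event when a fiducial marker disappears
# (e.g., a finger covering a printed marker acts as a "button press").
# Marker ID, key binding, and camera index are illustrative assumptions,
# not Beholder's actual configuration.
import cv2
from pynput.keyboard import Controller

BUTTON_MARKER_ID = 7  # hypothetical marker printed on the controller
BOUND_KEY = "a"       # hypothetical key the marker is mapped to

keyboard = Controller()
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
cap = cv2.VideoCapture(0)

was_visible = True
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    visible = ids is not None and BUTTON_MARKER_ID in ids.flatten()
    if was_visible and not visible:    # marker covered -> key down
        keyboard.press(BOUND_KEY)
    elif visible and not was_visible:  # marker revealed -> key up
        keyboard.release(BOUND_KEY)
    was_visible = visible
```

Because the output is an ordinary keystroke, the same controller works with any application that listens to the keyboard, which is the portability argument the abstract makes.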
Award ID(s):
2040489
PAR ID:
10484709
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9781450394727
Page Range / eLocation ID:
1 to 2
Format(s):
Medium: X
Location:
Daegu, Republic of Korea
Sponsoring Org:
National Science Foundation
More Like this
  1. The electronics-centered approach to physical computing presents challenges when designers build tangible interactive systems due to its inherent emphasis on circuitry and electronic components. To explore an alternative physical computing approach, we developed a computer vision (CV)-based system that uses a webcam, a computer, and printed fiducial markers to create functional tangible interfaces. Through a series of design studios, we probed how designers build tangible interfaces with this CV-driven approach. In this paper, we apply the annotated portfolio method to reflect on the fifteen outcomes from these studios. We observed that CV markers offer versatile materiality for tangible interactions, afford the use of democratic materials for interface construction, and engage designers in embodied debugging with their own vision as a proxy for CV. By sharing our insights, we inform other designers and educators who seek alternative ways to facilitate physical computing and tangible interaction design.
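One hedged sketch of the "versatile materiality" point: beyond acting as buttons, a printed marker's pose is itself an input channel. The snippet below, assuming OpenCV's ArUco detector and a hypothetical marker glued to a cardboard knob, reads the marker's in-plane rotation as a continuous dial value; none of this is the studios' actual code.

```python
# Sketch: read a printed marker's in-plane rotation as a continuous
# "dial" value. Marker ID and dictionary are illustrative assumptions.
import math
import cv2

DIAL_MARKER_ID = 3  # hypothetical marker glued to a cardboard knob

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        continue
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if marker_id != DIAL_MARKER_ID:
            continue
        # Angle of the marker's top edge gives its in-plane rotation.
        (x0, y0), (x1, y1) = marker_corners[0][0], marker_corners[0][1]
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
        print(f"dial angle: {angle:.1f} degrees")
```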
  2. Physical computing enables learners to create interactive projects using tangible materials and electronic components. These projects commonly utilize microcontroller boards like the micro:bit. In contrast, computer vision (CV) is a powerful technique for detecting input through interaction with everyday materials like paper, and it can be utilized for physical computing projects. However, CV-based toolkits are typically limited to input detection and rely on screen-based or projected outputs. This paper presents a hybrid approach that integrates a CV-based platform called Paper Playground with the micro:bit electronics platform. By combining CV-detected, paper-based inputs with the rich input-output possibilities of microcontroller-based systems, we showcase a multimodal physical computing toolkit. Through three project examples, we explore how this hybrid approach can enhance the creative possibilities in physical computing, and develop a preliminary design space combining CV-based and electronics-based physical computing. 
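As a rough illustration of the hybrid idea, the host-side sketch below forwards a CV-detected paper input to a micro:bit over USB serial using pyserial; the port name, baud rate, and message format are illustrative assumptions, not Paper Playground's actual protocol.

```python
# Sketch of the hybrid approach: forward a CV-detected paper input to a
# micro:bit over USB serial so the micro:bit can drive physical outputs
# (LEDs, motors, sound). Port, baud rate, and message format are
# illustrative assumptions.
import serial  # pyserial

PORT = "/dev/ttyACM0"  # hypothetical port where the micro:bit enumerates

def on_paper_input(event_name: str) -> None:
    """Called by the CV layer when a paper-based input is detected."""
    # Opening the port per event keeps the sketch short; a real bridge
    # would hold one connection open.
    with serial.Serial(PORT, 115200, timeout=1) as link:
        link.write(f"{event_name}\n".encode())

# e.g., the CV layer detects a paper slider moving:
on_paper_input("slider:0.75")
```

A MicroPython script on the micro:bit would read these newline-delimited messages from its UART and map them to outputs, completing the input-output loop the abstract describes.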
  3. Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware that chemists use in real laboratories is tethered to a laptop with a digitized lesson. As interactive educational content is increasingly placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, mainly centering on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous to begin learning content on a more ubiquitous interface (e.g., a keyboard) before moving to more novel, multi-modal MR devices/interfaces.
  4. Computer Vision (CV) is used in a broad range of Cyber-Physical Systems such as surgical and factory-floor robots and autonomous vehicles, including small Unmanned Aerial Systems (sUAS). It enables machines to perceive the world by detecting and classifying objects of interest, reconstructing 3D scenes, estimating motion, and maneuvering around objects. CV algorithms are developed using diverse machine learning and deep learning frameworks and are often deployed on resource-limited edge devices. As sUAS rely upon an accurate and timely perception of their environment to perform critical tasks, problems related to CV can create hazardous conditions leading to crashes or mission failure. In this paper, we perform a systematic literature review (SLR) of CV-related challenges spanning CV algorithms, hardware, and software engineering. We then group the reported challenges into five categories and fourteen sub-challenges and present existing solutions. As the current literature focuses primarily on CV and hardware challenges, we close by discussing implications for Software Engineering, drawing examples from a CV-enhanced multi-sUAS system.
  5.
    We investigate typing on a QWERTY keyboard rendered in virtual reality. Our system tracks users' hands in the virtual environment via a Leap Motion mounted on the front of a head-mounted display. This allows typing on an auto-correcting midair keyboard without the need for auxiliary input devices such as gloves or handheld controllers. It supports input via the index fingers of one or both hands. We compare two keyboard designs: a normal QWERTY layout and a split layout. We found users typed at around 16 words per minute using one or both index fingers on the normal layout, and about 15 words per minute using both index fingers on the split layout. Users had a corrected error rate below 2% in all cases. To explore midair typing with limited or no visual feedback, we had users type on an invisible keyboard. Users typed on this keyboard at 11 words per minute with an error rate of 3.3%, despite the keyboard providing almost no visual feedback.
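For context on the reported figures, here is a minimal sketch of the conventional text-entry metrics (words per minute and a character-level error rate) as usually defined in the literature; this is the standard formulation, not necessarily the exact analysis pipeline this paper used.

```python
# Standard text-entry metrics, as conventionally defined in the
# text-entry literature (not necessarily this paper's exact pipeline).

def words_per_minute(transcribed: str, seconds: float) -> float:
    # A "word" is standardized as five characters, including spaces;
    # the first character is discounted because timing conventionally
    # starts at the first keystroke.
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def character_error_rate(reference: str, transcribed: str) -> float:
    # Levenshtein edit distance between the target phrase and the text
    # actually entered, normalized by the target length.
    m, n = len(reference), len(transcribed)
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = min(dist[j] + 1,      # deletion
                      dist[j - 1] + 1,  # insertion
                      prev + (reference[i - 1] != transcribed[j - 1]))
            prev, dist[j] = dist[j], cur
    return dist[n] / m

print(words_per_minute("the quick brown fox", 14.2))  # ~15.2 WPM
```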