Much computational work has been done on identifying and interpreting the meaning of metaphors, but little on understanding the motivation behind their use. To computationally model discourse and social positioning in metaphor, we need a corpus annotated with metaphors relevant to speaker intentions. This paper reports a corpus study as a first step toward computational work on the social and discourse functions of metaphor. We use Amazon Mechanical Turk (MTurk) to annotate data from three web discussion forums covering distinct domains, and we compare these annotations to those produced under our own annotation scheme, which distinguishes three levels of metaphor: nonliteral, conventionalized, and literal. We hope this work raises questions about what remains to be done to understand how metaphors are used to achieve social goals in interaction.
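As a purely illustrative aside, agreement between crowd (MTurk) labels and expert labels under a three-way scheme like the one above is often summarized with a chance-corrected statistic. The sketch below uses made-up labels and Cohen's kappa; both the toy data and the choice of metric are assumptions for illustration, not the paper's protocol.

```python
# Hypothetical sketch: comparing crowd (MTurk) metaphor labels against
# expert labels from the three-level scheme described above.
# The toy data and the agreement metric (Cohen's kappa) are illustrative
# assumptions, not the paper's actual annotation protocol.
from sklearn.metrics import cohen_kappa_score

LABELS = ["literal", "conventionalized", "nonliteral"]

# Made-up annotations for ten discussion-forum tokens.
mturk_labels  = ["literal", "nonliteral", "conventionalized", "literal",
                 "nonliteral", "literal", "conventionalized", "nonliteral",
                 "literal", "conventionalized"]
expert_labels = ["literal", "nonliteral", "nonliteral", "literal",
                 "nonliteral", "conventionalized", "conventionalized",
                 "nonliteral", "literal", "literal"]

kappa = cohen_kappa_score(mturk_labels, expert_labels, labels=LABELS)
print(f"Cohen's kappa between crowd and expert annotations: {kappa:.2f}")
```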
Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World
We present Window-Shaping, a tangible mixed-reality (MR) interaction metaphor for design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using the sketch-and-inflate scheme, our metaphor enables quick design of dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects, without the need for 3D reconstruction or fiducial markers. Through a preliminary evaluation of our prototype application, we demonstrate the expressiveness provided by our design workflow, the effectiveness of our interaction scheme, and the potential of our metaphor.
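As a rough, generic illustration of how a sketch-and-inflate style step can work (this is an approximation under our own assumptions, not the Window-Shaping implementation), a closed 2D sketch region can be inflated into a rounded heightfield via a distance transform:

```python
# Illustrative approximation of a "sketch-and-inflate" step: a closed 2D
# sketch region (binary mask) is inflated into a rounded heightfield.
# Generic technique sketch only; not the Window-Shaping implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate_sketch(mask: np.ndarray, height_scale: float = 1.0) -> np.ndarray:
    """Turn a binary sketch mask into a smooth 'inflated' height map.

    Points deep inside the sketched region rise higher than points near
    its boundary, giving a pillow-like 3D surface over the 2D sketch.
    """
    dist = distance_transform_edt(mask)        # distance to the sketch boundary
    if dist.max() == 0:
        return np.zeros_like(dist)
    normalized = dist / dist.max()
    # A square-root profile rounds the silhouette instead of forming a cone.
    return height_scale * np.sqrt(normalized)

# Toy example: inflate a filled circle drawn on a 64x64 canvas.
yy, xx = np.mgrid[:64, :64]
circle = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
height_map = inflate_sketch(circle, height_scale=5.0)
print(height_map.shape, height_map.max())
```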
- PAR ID: 10041308
- Date Published:
- Journal Name: Tangible and Embedded Interaction
- Page Range / eLocation ID: 37-45
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Efficient performance and acquisition of physical skills, from sports techniques to surgical procedures, require instruction and feedback. In the absence of a human expert, Mixed Reality Intelligent Task Support (MixITS) can offer a promising alternative. These systems integrate Artificial Intelligence (AI) and Mixed Reality (MR) to provide real-time feedback and instruction as users practice and learn skills using physical tools and objects. However, designing MixITS systems presents challenges beyond engineering complexity: the interplay between users, AI, MR interfaces, and the physical environment creates unique design obstacles. To address these challenges, we present MixITS-Kit, an interaction design toolkit derived from our analysis of MixITS prototypes developed by eight student teams during a 10-week graduate course. Our toolkit comprises design considerations, design patterns, and an interaction canvas. Our evaluation suggests that the toolkit can serve as a valuable resource for novice practitioners designing MixITS systems and for researchers developing new tools for human-AI interaction design.
- Perceiving and manipulating 3D articulated objects (e.g., cabinets, doors) in human environments is an important yet challenging task for future home-assistant robots. The space of 3D articulated objects is exceptionally rich in its myriad semantic categories, diverse shape geometry, and complicated part functionality. Previous works mostly abstract kinematic structure, using estimated joint parameters and part poses as the visual representations for manipulating 3D articulated objects. In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point at which the perception system outputs more actionable guidance than kinematic structure estimation, predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals. We design an interaction-for-perception framework, VAT-Mart, that learns such actionable visual representations by simultaneously training a curiosity-driven reinforcement learning policy that explores diverse interaction trajectories and a perception module that summarizes and generalizes the explored knowledge into pointwise predictions across diverse shapes. Experiments demonstrate the effectiveness of the proposed approach on the large-scale PartNet-Mobility dataset in the SAPIEN environment and show promising generalization to novel test shapes, unseen object categories, and real-world data.
- We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of 2D sketching. RealFusion comprises three creative states in which users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface comprising a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.
- Realistic object interactions are crucial for creating immersive virtual experiences, yet synthesizing realistic 3D object dynamics in response to novel interactions remains a significant challenge. Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness. However, estimating physical material properties is an open problem due to the lack of material ground-truth data, as measuring these properties for real objects is highly difficult. We present PhysDreamer, a physics-based approach that endows static 3D objects with interactive dynamics by leveraging the object dynamics priors learned by video generation models. By distilling these priors, PhysDreamer enables the synthesis of realistic object responses to novel interactions, such as external forces or agent manipulations. We demonstrate our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study. PhysDreamer takes a step towards more engaging and realistic virtual experiences by enabling static 3D objects to dynamically respond to interactive stimuli in a physically plausible manner. See our project page at this https URL. (A generic, toy-level stiffness-response sketch follows this list.)
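As a generic, toy-level illustration of why an estimated material stiffness matters for action-conditioned dynamics (this is a hypothetical one-dimensional model, not PhysDreamer's method), a damped spring responds very differently to the same poke depending on its stiffness k:

```python
# Toy illustration: the same impulse produces different dynamics depending
# on the material stiffness k. A simple damped spring model, not PhysDreamer.
import numpy as np

def poke_response(k: float, damping: float = 0.3, mass: float = 1.0,
                  impulse: float = 1.0, dt: float = 0.01, steps: int = 500):
    """Simulate displacement of a 1-D damped spring after an initial impulse."""
    x, v = 0.0, impulse / mass              # impulse sets the initial velocity
    trajectory = []
    for _ in range(steps):
        a = (-k * x - damping * v) / mass   # Hooke's law + viscous damping
        v += a * dt                         # semi-implicit Euler integration
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

soft  = poke_response(k=5.0)    # low stiffness: large, slow deflection
stiff = poke_response(k=80.0)   # high stiffness: small, fast oscillation
print(f"peak deflection  soft={soft.max():.3f}  stiff={stiff.max():.3f}")
```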

