The ability to edit 3D assets with natural language presents a compelling paradigm to aid in the democratization of 3D content creation. However, while natural language is often effective at communicating general intent, it is poorly suited for specifying exact manipulation. To address this gap, we introduce ParSEL, a system that enables controllable editing of high-quality 3D assets with natural language. Given a segmented 3D mesh and an editing request, ParSEL produces a parameterized editing program. Adjusting these parameters allows users to explore shape variations with exact control over the magnitude of the edits. To infer editing programs which align with an input edit request, we leverage the abilities of large language models (LLMs). However, we find that although LLMs excel at identifying the initial edit operations, they often fail to infer complete editing programs, resulting in outputs that violate shape semantics. To overcome this issue, we introduce Analytical Edit Propagation (AEP), an algorithm which extends a seed edit with additional operations until a complete editing program has been formed. Unlike prior methods, AEP searches for analytical editing operations compatible with a range of possible user edits through the integration of computer algebra systems for geometric analysis. Experimentally, we demonstrate ParSEL's effectiveness over alternative system designs in enabling controllable editing of 3D objects through natural language requests.
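As a rough, hypothetical illustration of what a parameterized edit might look like (the abstract does not specify ParSEL's actual program format, so the function name, mesh representation, and single-operation form below are assumptions), the sketch scales one segmented part about its centroid, with the edit magnitude exposed as a user-adjustable parameter:

```python
# Hypothetical sketch of a parameterized edit; names and representation
# are illustrative assumptions, not ParSEL's actual output.
import numpy as np

def stretch_part(vertices, part_mask, axis=2, scale=1.0):
    """Scale the selected part's vertices along one axis about the part centroid.

    vertices:  (N, 3) float array of mesh vertex positions
    part_mask: (N,) boolean array marking vertices of the segmented part
    axis:      0, 1, or 2 (x, y, z)
    scale:     user-adjustable edit magnitude (1.0 = no change)
    """
    edited = vertices.copy()
    centroid = vertices[part_mask].mean(axis=0)
    edited[part_mask, axis] = centroid[axis] + scale * (
        vertices[part_mask, axis] - centroid[axis]
    )
    return edited

# Adjusting `scale` explores shape variations with exact control over
# the magnitude of the edit, e.g.:
# taller = stretch_part(chair_vertices, leg_mask, axis=2, scale=1.4)
```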
EditAR: A Digital Twin Authoring Environment for Creation of AR/VR and Video Instructions from a Single Demonstration
Augmented/Virtual reality and video-based media play a vital role in the digital learning revolution to train novices in spatial tasks. However, creating content for these different media requires expertise in several fields. We present EditAR, a unified authoring and editing environment that creates content for AR, VR, and video from a single demonstration. EditAR captures the user's interaction within an environment and creates a digital twin, enabling users without programming backgrounds to develop content. We conducted formative interviews with both subject and media experts to design the system. The prototype was developed and reviewed by experts. We also performed a user study comparing traditional video creation with 2D video creation from 3D recordings via a 3D editor that uses freehand interaction for in-headset editing. Users took one-fifth the time to record instructions with EditAR, preferred it over the traditional workflow, and gave it significantly higher usability scores.
- Award ID(s): 1839971
- PAR ID: 10396711
- Date Published:
- Journal Name: IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
- Page Range / eLocation ID: 326 to 335
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The environment, science, technology, engineering, arts, and mathematics fields (a collection of fields we call E-STEAM) continue to grow and remain economically and ecologically important. However, historically excluded groups remain underrepresented in science and technology professions, particularly in environmental and digital media fields. Consequently, building pathways for historically excluded students to enter economically viable and ecologically influential E-STEAM professions is critically important. These new pathways hold promise for increasing innovation within these fields and ensuring a multiplicity of representation as these fields are shaped and reshaped to attend to the plural interests of diverse communities. To that end, this conceptual paper describes an eco-digital storytelling (EDS) approach to engaging historically excluded populations in science, technology, engineering, and mathematics (STEM). This approach offers structured learning opportunities connected to learner interests and community needs, with the aim of increasing the E-STEAM identity and career interest of teens from groups historically excluded from E-STEAM fields. E-STEAM identity is a meaning one can attach to oneself, or that can be ascribed externally by others, as individuals interact and engage in E-STEAM fields in ways that foreground the environment. The EDS approach leverages community-based action, technology and digital media, and arts and storytelling as entry points for engaging learners. EDS is designed to increase teens' content knowledge within multiple E-STEAM fields and to provide numerous technology-rich experiences in both the application of geospatial technologies (e.g., GPS, interactive maps) and digital media creation (e.g., video, animation, ArcGIS StoryMaps) as a way to shape teens' cultural learning pathways. Examples of rich digital media presentations developed to communicate the EDS approach and local environmental opportunities, challenges, and projects are provided, exemplifying how both participation in and communication of environmental action can contribute to more promising and sustainable futures.
-
Recent advances in Augmented Reality (AR) devices and their maturity as a technology offer new modalities for interaction between learners and their learning environments. Such capabilities are particularly important for learning that involves hands-on activities, where there is a compelling need to: (a) make connections between knowledge elements that have been taught at different times, (b) apply principles and theoretical knowledge in a concrete experimental setting, (c) understand the limitations of what can be studied via models and via experiments, (d) cope with increasing shortages in teaching-support staff and instructional material at the intersection of disciplines, and (e) improve student engagement in their learning. AR devices that are integrated into training and education systems can be effectively used to deliver just-in-time informatics to augment physical workspaces and learning environments with virtual artifacts. We present a system that demonstrates a solution to a critical registration problem and enables a multi-disciplinary team to develop the pedagogical content without the need for extensive coding. The most popular approach for developing AR applications is to develop a game using a standard game engine such as UNITY or UNREAL. These engines offer a powerful environment for developing a large variety of games and an exhaustive library of digital assets. In contrast, the framework we offer supports a limited range of human-environment interactions that are suitable and effective for training and education. Our system offers four important capabilities – annotation, navigation, guidance, and operator safety. These capabilities are presented and described in detail. The above framework motivates a change of focus – from game development to AR content development. While game development is an intensive activity that involves extensive programming, AR content development is a multi-disciplinary activity that requires contributions from a large team of graphics designers, content creators, domain experts, pedagogy experts, and learning evaluators. We have demonstrated that such a multi-disciplinary team of experts working with our framework can use popular content creation tools to design and develop the virtual artifacts required for the AR system. These artifacts can be archived in a standard relational database and hosted on robust cloud-based backend systems for scale-up. The AR content creators can own their content as non-fungible tokens and sequence the presentations either to improve pedagogical novelty or to personalize the learning.
-
Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts are suspended, the processes governing content moderation are largely invisible, making it difficult to assess content moderation bias. To study this bias, we conducted a digital ethnography of marginalized users on Reddit's /r/FTM subreddit and Twitch's "Just Chatting" and "Pools, Hot Tubs, and Beaches" categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, "unban appeal" streams, and "back from my ban" streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, at times harming those who must see offensive content and at other times allowing for increased platform accountability.