As the number of applications for tactile feedback technology rapidly increases, so too does the need for efficient, flexible, and extensible representations of virtual textures. The previously introduced Single-Pitch Texel rendering algorithm lets designers produce textures with perceptually wide-band spectral characteristics while requiring very few input parameters. This paper expands the capabilities of the rendering algorithm. The Texel algorithm is shown to reliably render diverse families of fine textures with widely varied spectral characteristics. Furthermore, with the aid of an assistive algorithm, subjects consistently navigated the Texel parameter space in a matching task. Finally, a psychophysical study demonstrates the rendering algorithm’s resilience to spectral quantization, further reducing the data required to represent a virtual texture.
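To make the idea of a sparsely parameterized texture concrete, here is a minimal, hypothetical sketch of a texel-style rendering loop: texels are laid out at one fixed pitch, and each crossing of a texel boundary triggers a short decaying sinusoidal transient on the actuator. The transient model and every parameter name (pitch_mm, amp, freq_hz, decay) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def texel_transient(t, amp=1.0, freq_hz=250.0, decay=80.0):
    """Hypothetical per-texel response: a decaying sinusoid (an assumption,
    not the paper's model)."""
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

def render_texel_texture(finger_pos_mm, fs=2000.0, pitch_mm=1.0, **texel_params):
    """Render an actuator drive signal from a sampled finger trajectory.

    finger_pos_mm : 1-D array of finger positions (mm), sampled at fs (Hz).
    A transient is (re)started whenever the finger crosses into a new texel
    of width `pitch_mm`; a single pitch plus a handful of transient
    parameters determine the entire signal.
    """
    out = np.zeros_like(finger_pos_mm)
    t_since_hit = np.inf                      # time since last texel crossing
    last_index = np.floor(finger_pos_mm[0] / pitch_mm)
    for i, x in enumerate(finger_pos_mm):
        idx = np.floor(x / pitch_mm)
        if idx != last_index:                 # crossed into a new texel
            t_since_hit = 0.0
            last_index = idx
        if np.isfinite(t_since_hit):
            out[i] = texel_transient(t_since_hit, **texel_params)
            t_since_hit += 1.0 / fs
    return out

# Example: a finger sweeping at 50 mm/s over a 1 mm-pitch texture for 1 s.
t = np.arange(0, 1.0, 1 / 2000.0)
drive = render_texel_texture(50.0 * t, fs=2000.0, pitch_mm=1.0)
```

Even this toy version suggests why so few parameters are needed: one spatial pitch and a few transient parameters fully determine the drive signal.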
- PAR ID: 10484490
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: PNAS Nexus
- Volume: 3
- Issue: 1
- ISSN: 2752-6542
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Mobile augmented reality (AR) has a wide range of promising applications, but its efficacy is subject to the impact of environment texture on both machine and human perception. Performance of the machine perception algorithm underlying accurate positioning of virtual content, visual-inertial SLAM (VI-SLAM), is known to degrade in low-texture conditions, but there is a lack of data in realistic scenarios. We address this through extensive experiments using a game engine-based emulator, with 112 textures and over 5000 trials. Conversely, human task performance and response times in AR have been shown to increase in environments perceived as textured. We investigate and provide encouraging evidence for invisible textures, which result in good VI-SLAM performance with minimal impact on human perception of virtual content. This arises from fundamental differences between VI-SLAM-based machine perception and human perception as described by the contrast sensitivity function. Our insights open up exciting possibilities for deploying ambient IoT devices that display invisible textures, as part of systems which automatically optimize AR environments.
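The last point rests on the contrast sensitivity function (CSF): a pattern whose contrast lies below the human detection threshold at its spatial frequency can still supply image gradients for VI-SLAM feature tracking. As a hedged illustration only, the sketch below uses the classic Mannos–Sakrison CSF approximation with a nominal peak sensitivity to judge whether a sinusoidal texture would likely be invisible; the threshold model and the peak-sensitivity value are textbook-style assumptions, not figures from the paper.

```python
import numpy as np

def csf_mannos_sakrison(f_cpd):
    """Normalized contrast sensitivity at spatial frequency f (cycles/degree),
    using the Mannos-Sakrison approximation (peaks near ~8 cpd)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def likely_invisible(contrast, f_cpd, peak_sensitivity=200.0):
    """Return True if a sinusoidal texture of the given Michelson contrast is
    likely below the human detection threshold at this spatial frequency.

    peak_sensitivity is a nominal assumption (not from the paper); real
    thresholds depend on luminance, eccentricity, and display conditions.
    """
    threshold_contrast = 1.0 / (peak_sensitivity * csf_mannos_sakrison(f_cpd))
    return contrast < threshold_contrast

# Example: a fine, low-contrast pattern -- still rich in gradients for
# VI-SLAM feature tracking, but plausibly below the human threshold.
print(likely_invisible(contrast=0.02, f_cpd=40.0))
```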
- Inverse rendering pipelines are gaining prominence in realizing photo-realistic reconstruction of real-world objects for emulating them in virtual reality scenes. Apart from material reflectances, spectral rendering and in-scene illuminants' spectral power distributions (SPDs) play important roles in producing photo-realistic images. We present a simple, low-cost technique to capture and reconstruct the SPD of uniform illuminants. Instead of requiring a costly spectrometer for such measurements, our method uses a diffractive compact disk (CD-ROM) and a machine learning approach for accurate estimation. We show our method to work well with spotlights in simulation and in a few real-world examples. The presented results clearly demonstrate the reliability of our approach through quantitative and qualitative evaluations, especially in spectral rendering of iridescent materials.
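As a hedged sketch of the general recipe only (the feature extraction and the choice of regressor below are assumptions, not the paper's method): a photo of the light diffracted by the CD is reduced to a vector of per-band intensities, and a simple regressor trained on lights with known spectra maps that vector to sampled SPD values.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: each row of X is a feature vector extracted from
# a photo of the CD's diffraction pattern (e.g., mean RGB over strips along the
# diffraction streak); each row of Y is the known SPD of that training light,
# sampled at fixed wavelengths (e.g., 400-700 nm in 10 nm steps).
rng = np.random.default_rng(0)
n_lights, n_features, n_bands = 50, 30, 31
X_train = rng.random((n_lights, n_features))       # placeholder features
Y_train = rng.random((n_lights, n_bands))          # placeholder spectra

# A regularized linear model is a reasonable baseline for this smooth,
# low-dimensional mapping; the paper may use a different learner.
model = Ridge(alpha=1e-2)
model.fit(X_train, Y_train)

def estimate_spd(diffraction_features):
    """Estimate an SPD (clipped to be non-negative) from diffraction features."""
    spd = model.predict(diffraction_features.reshape(1, -1))[0]
    return np.clip(spd, 0.0, None)

spd_hat = estimate_spd(rng.random(n_features))
print(spd_hat.shape)   # (31,) -> one value per wavelength band
```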
- Impedance-based kinesthetic haptic devices have been a focus of study for many years. Factors such as delay and the dynamics of the device itself affect the stable rendering range of traditional active kinesthetic devices. A parallel hybrid actuation approach, which combines active energy-supplying actuators and passive energy-absorbing actuators into a single actuator, has recently been shown experimentally to increase the range of stable virtual stiffness a haptic device can achieve when compared to the active component of the actuator alone. This work presents both a stability analysis and a rendering range analysis that aim to identify the mechanisms and limitations by which parallel hybrid actuation increases the stable rendering range of virtual stiffness. Increases in actuator stability are analytically and experimentally shown to be linked to the stiffness of the passive actuator.
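For context, a hedged reminder of the baseline limit that hybrid actuation tries to push past: for a conventional sampled-data impedance display, the classic Colgate–Brown passivity condition requires the device's physical damping b to satisfy b > KT/2 + |B|, where K and B are the virtual stiffness and damping and T is the servo sample period. The sketch below evaluates that baseline bound; it is not the paper's hybrid stability analysis.

```python
def max_passive_stiffness(b_physical, T, B_virtual=0.0):
    """Largest virtual stiffness K (N/m) satisfying the Colgate-Brown passivity
    condition b > K*T/2 + |B| for a conventional impedance-type haptic display.

    b_physical : physical damping of the device (N*s/m)
    T          : servo sample period (s)
    B_virtual  : virtual damping commanded by the controller (N*s/m)
    """
    return 2.0 * (b_physical - abs(B_virtual)) / T

# Example: 1 kHz servo rate, 2 N*s/m of inherent device damping.
K_max = max_passive_stiffness(b_physical=2.0, T=1e-3, B_virtual=0.5)
print(f"Baseline passive stiffness limit: {K_max:.0f} N/m")  # ~3000 N/m
```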
- We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera that is mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, and extracts textures from egocentric inputs. In addition, to encode dynamic appearances, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For correct pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the textures into a combined feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
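The decomposition into texture synthesis, pose construction, and neural image translation can be sketched in outline form. Everything below is a placeholder written for illustration: the function names, array shapes, and dummy computations are assumptions, not the EgoRenderer code.

```python
import numpy as np

# Hypothetical, heavily simplified outline of the three stages described above;
# numpy arrays stand in for images and network outputs.

def texture_synthesis(fisheye_img):
    """Stand-in for Ego-DPNet plus the implicit texture stack: map the
    egocentric image to a texture-space appearance map."""
    return np.zeros((256, 256, 3)) + fisheye_img.mean(axis=(0, 1))

def pose_construction(fisheye_img, target_view):
    """Stand-in: estimate a parametric body pose from the egocentric view and
    re-project the body model into the requested external viewpoint."""
    pose_params = np.zeros(72)                   # e.g., an SMPL-like pose vector
    target_pose_image = np.zeros((256, 256, 3))  # rendered pose map at target view
    return pose_params, target_pose_image

def neural_image_translation(pose_image, texture_map):
    """Stand-in for the translation network: pose + texture features -> image."""
    features = np.concatenate([pose_image, texture_map], axis=-1)
    return features[..., :3]

def render_free_viewpoint_frame(fisheye_img, target_view):
    textures = texture_synthesis(fisheye_img)
    _, pose_image = pose_construction(fisheye_img, target_view)
    return neural_image_translation(pose_image, textures)

frame = render_free_viewpoint_frame(np.random.rand(512, 512, 3), target_view="front")
print(frame.shape)   # (256, 256, 3)
```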
- We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines. Our method adopts neural representations for geometry, as signed distance fields (SDFs), and for materials during optimization to enjoy their flexibility and compactness, and features a hybrid optimization scheme for neural SDFs: first, optimize using a volumetric radiance field approach to recover correct topology, then optimize further using edge-aware physics-based surface rendering for geometry refinement and disentanglement of materials and lighting. In the second stage, we also draw inspiration from mesh-based differentiable rendering and design a novel edge sampling algorithm for neural SDFs to further improve performance. We show that IRON achieves significantly better inverse rendering quality compared to prior works.
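A hedged, schematic way to write the two stages (illustrative notation only, not IRON's exact objectives): stage one fits the neural SDF and a radiance field through a volume-rendering loss, which is robust to topology changes; stage two switches to edge-aware, physics-based surface rendering at the SDF's zero level set to refine geometry and separate material parameters from lighting.

```latex
% Illustrative two-stage objectives (not IRON's exact losses).
% Stage 1: volumetric radiance-field fitting of the SDF f_theta and color c_phi
% over a set of training rays R.
\mathcal{L}_{\mathrm{vol}}(\theta,\phi)
  = \sum_{r \in \mathcal{R}}
    \bigl\lVert \hat{C}_{\mathrm{vol}}(r;\, f_\theta, c_\phi) - C_{\mathrm{gt}}(r) \bigr\rVert_2^2

% Stage 2: edge-aware physics-based surface rendering at the zero level set
% x_s(p), with f_theta(x_s) = 0, over pixels P, refining geometry and
% disentangling the material parameters psi from the lighting L.
\mathcal{L}_{\mathrm{surf}}(\theta,\psi,L)
  = \sum_{p \in \mathcal{P}}
    \bigl\lVert R\bigl(x_s(p),\, n_\theta(x_s(p)),\, \mathrm{BRDF}_\psi,\, L\bigr) - I_{\mathrm{gt}}(p) \bigr\rVert_2^2
```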