This content will become publicly available on October 14, 2025
- Award ID(s):
- 2107409
- PAR ID:
- 10555074
- Publisher / Repository:
- IEEE Visualization
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
An accurate understanding of omnidirectional environment lighting is crucial for high-quality virtual object rendering in mobile augmented reality (AR). In particular, to support reflective rendering, existing methods have leveraged deep learning models to estimate lighting or have used physical light probes to capture it, typically represented in the form of an environment map. However, these methods often fail to provide visually coherent details or require additional setups. For example, the commercial framework ARKit uses a convolutional neural network that can generate realistic environment maps; however, the corresponding reflective rendering might not match the physical environment. In this work, we present the design and implementation of a lighting reconstruction framework called LITAR that enables realistic and visually coherent rendering. LITAR addresses several challenges of supporting lighting information for mobile AR. First, to address the spatial variance problem, LITAR uses two-field lighting reconstruction, dividing the lighting reconstruction task into spatial variance-aware near-field reconstruction and directional-aware far-field reconstruction; the resulting environment map allows reflective rendering with correct color tones. Second, LITAR uses two noise-tolerant data capturing policies to ensure data quality: guided bootstrapped movement and motion-based automatic capturing. Third, to bridge the gap between mobile computation capability and the high computational cost of lighting reconstruction, LITAR employs two novel real-time environment map rendering techniques, multi-resolution projection and anchor extrapolation. These two techniques effectively remove the need for time-consuming mesh reconstruction while maintaining visual quality. Lastly, LITAR provides several knobs that let mobile AR application developers make quality-performance trade-offs in lighting reconstruction.
We evaluated the performance of LITAR using a small-scale testbed experiment and a controlled simulation. Our testbed-based evaluation shows that LITAR achieves more visually coherent rendering effects than ARKit. Our multi-resolution projection design significantly reduces point cloud projection time, from about 3 seconds to 14.6 milliseconds. Our simulation shows that LITAR, on average, achieves up to 44.1% higher PSNR than a recent work, Xihe, on two complex objects with physically-based materials.
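PSNR, the metric used in the simulation comparison above, is a standard image-quality measure. As a reminder of how it is computed, here is a minimal sketch of the textbook formula (illustrative only, not LITAR's evaluation code; the example arrays are hypothetical):

```python
import numpy as np

def psnr(reference, rendered, max_value=255.0):
    """Peak signal-to-noise ratio between two images, in dB.

    Higher is better; identical images yield infinity.
    """
    diff = reference.astype(np.float64) - rendered.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error per pixel
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# A uniform error of 10 gray levels gives MSE = 100.
a = np.full((8, 8), 100.0)
b = a + 10.0
print(round(psnr(a, b), 2))  # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

A "44.1% higher PSNR" as reported above is a relative improvement in this dB value between two rendered results, each compared against the same ground-truth environment map rendering.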
-
Display technologies in the fields of virtual and augmented reality affect the appearance of human representations, such as avatars used in telepresence or entertainment applications, based on the user's current viewing conditions. With changing viewing conditions, the perceived appearance of one's avatar may change in an unexpected or undesired manner, which may alter user behavior toward these avatars and cause frustration with the AR display. In this paper, we describe a user study (N=20) in which participants saw themselves in a mirror standing next to their own avatar, viewed through a HoloLens 2 optical see-through head-mounted display. Participants were tasked with matching their avatar's appearance to their own under two environment lighting conditions (200 lux and 2,000 lux). Our results showed that the intensity of environment lighting had a significant effect on the skin colors participants selected for their avatars: participants with dark skin colors tended to make their avatar's skin color lighter, nearly to the level of participants with light skin colors. Further, female participants in particular made their avatar's hair color darker in the brighter environment lighting condition. We discuss our results with respect to technological limitations and effects on the diversity of avatar representations on optical see-through displays.
-
Abstract: The "Light Environment Hypothesis" (LEH) proposes that the evolution of interspecific variation in plumage color is driven by variation in light environments across habitats. If ambient light has the potential to drive interspecific variation, a similar influence should be expected for intraspecific recognition, as color signals are an adaptive response to changing ambient light levels across habitats. Using spectrometry, avian-appropriate models of vision, and phylogenetic comparative methods, I quantified dichromatism and tested the LEH in both intra- and interspecific contexts in 33 Amazonian species from the infraorder Furnariides living in environments with different light levels. Although these birds are sexually monochromatic to humans, 81.8% of the species had at least one dichromatic patch in their plumage, mostly on dorsal areas, which provides evidence for a role of dichromatism in sex recognition. Furthermore, birds from habitats with high levels of ambient light had higher dichromatism levels, as well as brighter, more saturated, and more diverse plumages, suggesting that visual communication is less constrained in these habitats. Overall, my results provide support for the LEH and suggest that ambient light plays a major role in the evolution of color signals in this group of birds in both intra- and interspecific contexts. Additionally, plumage variation across light environments for these drab birds highlights the importance of considering ambient light and avian-appropriate models of vision when studying the evolution of color signals in birds.
-
A recent data visualization literacy study shows that most people cannot read networks that use hierarchical cluster representations such as "supernoding" and "edge bundling." Other studies comparing standard node-link representations with map-like visualizations show that map-like visualizations are superior in terms of task performance, memorization, and engagement. With this in mind, we propose the Zoomable Multi-Level Tree (ZMLT) algorithm for map-like visualization of large graphs that is representative, real, persistent, overlap-free labeled, planar, and compact. These six desirable properties are formalized with the following guarantees: (1) the abstract and embedded trees represent the underlying graph appropriately at different levels of detail (in terms of both the structure of the graph and its embedding); (2) at every level of detail we show real vertices and real paths from the underlying graph; (3) if a node or edge appears in a given level, it also appears in all deeper levels; (4) all nodes at the current and higher levels are labeled, and there are no label overlaps; (5) there are no crossings on any level; (6) the drawing area is proportional to the total area of the labels. The algorithm is implemented, and we have a functional prototype of the interactive interface running in a web browser.
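Guarantee (3) above, persistence of nodes and edges across zoom levels, is simple to state as a check over nested sets. A minimal sketch (illustrative only, not the ZMLT implementation; the level sets are hypothetical):

```python
def levels_are_persistent(levels):
    """Check the persistence guarantee: any element shown at some
    level of detail also appears at every deeper level.

    `levels` is ordered from coarsest to finest; each entry is the
    set of graph elements (nodes/edges) displayed at that level.
    """
    return all(shallow <= deep for shallow, deep in zip(levels, levels[1:]))

coarse = {"a", "b"}
mid = {"a", "b", "c"}
fine = {"a", "b", "c", "d"}
print(levels_are_persistent([coarse, mid, fine]))         # True: levels only grow
print(levels_are_persistent([coarse, {"a", "c"}, fine]))  # False: "b" vanishes mid-zoom
```

Persistence is what makes zooming predictable for the user: deepening the view only reveals more of the graph, never hides something already seen.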
-
Due to the additive light model employed by most optical see-through head-mounted displays (OST-HMDs), these devices provide the best augmented reality (AR) views in dark environments, where the added AR light does not have to compete against existing real-world lighting. AR imagery displayed on such devices loses a significant amount of contrast in well-lit environments, such as outdoors in direct sunlight. To compensate, OST-HMDs often use a tinted visor to reduce the amount of environment light that reaches the user's eyes, which in turn reduces contrast in the user's view of the physical environment. While these effects are well known and grounded in existing literature, formal measurements of the illuminance and contrast of modern OST-HMDs have been missing. In this paper, we provide illuminance measurements for both the Microsoft HoloLens 1 and its successor, the HoloLens 2, under environment lighting conditions ranging from 0 to 20,000 lux. We evaluate how environment lighting impacts the user by calculating contrast ratios between rendered black (transparent) and white imagery displayed under these conditions, and we evaluate how the intensity of environment lighting is affected by donning and using the HMD. Our results indicate the need for further refinement in the design of future OST-HMDs to optimize contrast in environments with illuminance values at or above those found in indoor working environments.
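The additive light model above implies a simple form for the black-vs-white contrast ratio: rendered white adds display light on top of the transmitted environment, while rendered black (transparent) passes only the transmitted environment light. A minimal sketch of this relationship, using hypothetical numbers rather than the paper's measured values:

```python
def contrast_ratio(display_luminance, environment_luminance, visor_transmittance):
    """Black-vs-white contrast ratio of an additive OST-HMD.

    White pixels: display light plus environment light leaking through the visor.
    Black pixels: transparent, so only the transmitted environment light remains.
    All luminances in the same units (e.g. cd/m^2); transmittance in [0, 1].
    """
    background = visor_transmittance * environment_luminance
    return (display_luminance + background) / background

# Hypothetical display emitting 500 units behind a 40%-transmittance visor:
print(contrast_ratio(500, 10, 0.4))    # dim room -> 126.0 (high contrast)
print(contrast_ratio(500, 5000, 0.4))  # bright scene -> 1.25 (contrast collapses)
```

This is why contrast degrades toward 1:1 as environment illuminance grows: the fixed display luminance is swamped by the background term in both numerator and denominator.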