

This content will become publicly available on August 7, 2025

Title: Enabling Paper-Based Surface Authentication via Digital Twin and Experimental Verification
Paper surfaces can be used for anticounterfeiting because of their inherent, physically unclonable irregularities. Prior work used mobile cameras, aided by the camera flash, to capture paper's microstructure. However, prolonged exposure to flash in the workplace may harm the eyes of workers involved in the authentication process. This work proposes an authentication method that exploits indoor lighting without the need for a camera flash. Indoor lighting is weaker and suffers interference from secondary reflections, making it challenging to achieve good authentication performance. To this end, we create a digital twin (DT) replication of a real-world setting in which paper patches are captured under multiple lights, taking key physical and optical laws into account. From DT simulations, we identify the factors most important to authentication performance and design an authentication method for an office setup. Experiments with three different types of paper show that the DT-guided authentication method achieves satisfactory performance without using active light sources.
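The abstract does not spell out the matching algorithm, but surface-authentication schemes built on unclonable microstructure typically compare a freshly captured patch against an enrolled reference using a correlation score. A minimal sketch of that decision rule, with a hypothetical `authenticate` helper and an illustrative 0.7 threshold (both our assumptions, not the paper's method):

```python
import math

def normalized_correlation(patch_a, patch_b):
    """Pearson correlation between two equal-length intensity vectors."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    var_a = sum((a - mean_a) ** 2 for a in patch_a)
    var_b = sum((b - mean_b) ** 2 for b in patch_b)
    if var_a == 0 or var_b == 0:
        return 0.0
    return cov / math.sqrt(var_a * var_b)

def authenticate(query_patch, enrolled_patch, threshold=0.7):
    """Accept the paper patch if its microstructure correlates strongly
    with the enrolled reference; the threshold is application-dependent."""
    return normalized_correlation(query_patch, enrolled_patch) >= threshold
```

Mean subtraction makes the score tolerant to a uniform brightness shift, which matters when capture happens under uncontrolled indoor lighting rather than a calibrated flash.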
Award ID(s):
2227261 2227499
PAR ID:
10549387
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Volume:
MIPR 2024
ISBN:
979-8-3503-5142-2
Page Range / eLocation ID:
430 to 438
Subject(s) / Keyword(s):
Authentication, Lighting, Cameras, Reflection, Digital twins, Optical reflection, Microstructure, Surface treatment, Physics, Light sources
Format(s):
Medium: X
Location:
San Jose, CA, USA (Int’l Conf. on Multimedia Info. Processing and Retrieval -- MIPR)
Sponsoring Org:
National Science Foundation
More Like this
  1. Real-world lighting often consists of multiple illuminants with different spectra. Separating and manipulating these illuminants in post-processing is a challenging problem that typically requires either significant manual input or calibrated scene geometry and lighting. In this work, we leverage a flash/no-flash image pair to analyze and edit scene illuminants based on their spectral differences. We derive a novel physics-based relationship between color variations in the observed flash/no-flash intensities and the spectra and surface shading of the individual scene illuminants. Our technique uses this constraint to automatically separate an image into constituent images, each lit by a single illuminant. This separation supports applications such as white balancing, lighting editing, and RGB photometric stereo, where we demonstrate results that outperform state-of-the-art techniques on a wide range of images.
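The separation above builds on a standard flash/no-flash observation: with registered frames and matched exposure, subtracting the no-flash image removes ambient light and leaves the flash-lit component. The sketch below shows only this intensity-domain step on flattened grayscale pixel lists; the paper's full method additionally models the illuminants' spectra. Function and variable names are ours:

```python
def separate_illuminants(flash_img, no_flash_img):
    """Split a scene into flash-lit and ambient-lit components.

    Assumes the two frames are registered and share exposure settings,
    so the per-pixel difference isolates light contributed by the flash
    alone, while the no-flash frame is the ambient component.
    """
    # Clamp at zero to absorb small noise where flash adds no light.
    flash_only = [max(f - n, 0.0) for f, n in zip(flash_img, no_flash_img)]
    ambient_only = list(no_flash_img)
    return flash_only, ambient_only
```

Because the flash spectrum is known, the flash-only component can then serve as a reference for white-balancing or relighting the ambient component.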
  2. Lighting understanding plays an important role in virtual object composition, including mobile augmented reality (AR) applications. Prior work often targets recovering lighting from the physical environment to support photorealistic AR rendering. Because the common workflow is to use a back-facing camera to capture the physical world for overlaying virtual objects, we refer to this usage pattern as back-facing AR. However, existing methods often fall short in supporting emerging front-facing mobile AR applications, e.g., virtual try-on, where a user leverages a front-facing camera to preview products of different styles (e.g., glasses or hats). This shortfall stems from the unique challenges of obtaining 360° HDR environment maps, an ideal lighting representation, from the front-facing camera with existing techniques. In this paper, we propose leveraging dual-camera streaming to generate a high-quality environment map by combining multi-view lighting reconstruction with parametric directional lighting estimation. Our preliminary results show improved rendering quality using a dual-camera setup for front-facing AR compared to a commercial solution.
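The parametric directional lighting estimation mentioned above can be illustrated with a toy Lambertian model: if observed shading is proportional to max(n·l, 0), the intensity-weighted average of the surface normals points toward the dominant light. This is only a sketch of the general idea under that assumption, not the paper's estimator:

```python
import math

def estimate_light_direction(normals, intensities):
    """Estimate one dominant light direction from surface normals and
    observed shading.  Brightly lit normals face the light, so their
    intensity-weighted average approximates the light direction."""
    sx = sum(n[0] * i for n, i in zip(normals, intensities))
    sy = sum(n[1] * i for n, i in zip(normals, intensities))
    sz = sum(n[2] * i for n, i in zip(normals, intensities))
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    if norm == 0:
        return (0.0, 0.0, 0.0)
    return (sx / norm, sy / norm, sz / norm)  # unit vector toward light
```

A face visible to a front-facing camera supplies exactly this kind of normal/shading data, which is one reason parametric estimation is attractive when a full environment map is unavailable.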
  3. Many vision-based indoor localization methods require tedious and comprehensive pre-mapping of built environments. This research proposes a mapping-free approach to estimating indoor camera poses based on a 3D style-transferred building information model (BIM) and photogrammetry techniques. To address the cross-domain gap between virtual 3D models and real-life photographs, a CycleGAN model was developed to transform BIM renderings into photorealistic images. A photogrammetry-based algorithm was developed to estimate camera pose using the visual and spatial information extracted from the style-transferred BIM. The experiments demonstrated the efficacy of CycleGAN in bridging the cross-domain gap, significantly improving image retrieval and feature correspondence detection. With the 3D coordinates retrieved from the BIM, the proposed method achieves near real-time camera pose estimation with an accuracy of 1.38 m and 10.1° in indoor environments.

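The retrieval step such a pipeline implies can be sketched simply: each style-transferred BIM rendering is stored with a feature descriptor and its known camera pose, and the query photo's descriptor selects the closest rendering as a pose prior for the subsequent photogrammetric refinement. The descriptor format and the `retrieve_pose_prior` name are our illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def retrieve_pose_prior(query_descriptor, rendering_db):
    """Return the database entry (BIM rendering plus its known camera
    pose) whose descriptor best matches the query photograph."""
    return max(rendering_db,
               key=lambda e: cosine_similarity(query_descriptor, e["descriptor"]))
```

Style transfer matters precisely here: descriptors computed on raw BIM renderings match real photographs poorly, so closing the domain gap improves this retrieval step directly.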
  4. Object detection and semantic segmentation are two of the most widely adopted deep learning tasks in agricultural applications. One major source of variability in images acquired outdoors for such tasks is changing lighting conditions, which can alter the appearance of individual objects or of the entire image. While transfer learning and data augmentation reduce, to some extent, the need for large amounts of training data, the wide variety of cultivars and the lack of shared datasets in agriculture make wide-scale field deployment difficult. In this paper, we present a high-throughput, robust active-lighting-based camera system that generates consistent images under all lighting conditions. We detail experiments showing that this consistency in image quality allows deep neural networks for object detection to be trained with relatively few images. We further present results from a field experiment under extreme lighting conditions, in which images captured without active lighting fail to yield consistent results. The experimental results show that, on average, object detection networks trained on consistent data required nearly four times less data to achieve a similar level of accuracy. This work could provide pragmatic solutions to computer vision needs in agriculture.
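One simple way to quantify the image consistency the system targets is the spread of mean brightness across a capture set, which effective active lighting should drive toward zero. This metric is our illustration, not the paper's evaluation protocol:

```python
def brightness_spread(images):
    """Standard deviation of per-image mean brightness across a capture
    set (images as flat lists of pixel intensities).  Lower values mean
    more consistent exposure across varying ambient conditions."""
    means = [sum(img) / len(img) for img in images]
    mu = sum(means) / len(means)
    return (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5
```

A detector trained on a low-spread set sees far less nuisance variation, which is consistent with the reported reduction in training data needed.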
  5. User authentication is a critical process in both corporate and home environments due to ever-growing security and privacy concerns. With the advancement of smart cities and home environments, user authentication has taken on broader implications: it not only prevents unauthorized users from accessing confidential information but also enables services customized to a specific user. Traditional approaches to user authentication require either specialized device installation or inconvenient wearable sensors. This paper supports the extended concept of user authentication with a device-free approach that leverages the WiFi signals made prevalent by IoT devices such as smart refrigerators, smart TVs, and thermostats. The proposed system uses WiFi signals to capture the unique human physiological and behavioral characteristics exhibited in daily activities, both walking and stationary. In particular, we extract representative features from channel state information (CSI) measurements of WiFi signals and develop a deep-learning-based user authentication scheme that accurately identifies each individual user. Extensive experiments in two typical indoor environments, a university office and an apartment, demonstrate the effectiveness of the proposed system, which achieves over 94% and 91% authentication accuracy with 11 subjects through walking and stationary activities, respectively.
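The extract-features-then-classify pipeline can be sketched with hand-crafted CSI amplitude statistics and a nearest-centroid rule standing in for the paper's deep network; all names and values here are illustrative assumptions, not the paper's model:

```python
import math

def csi_features(amplitudes):
    """Summary features of one CSI measurement's subcarrier amplitudes:
    mean, standard deviation, and peak-to-peak range."""
    n = len(amplitudes)
    mean = sum(amplitudes) / n
    std = math.sqrt(sum((a - mean) ** 2 for a in amplitudes) / n)
    return (mean, std, max(amplitudes) - min(amplitudes))

def identify_user(feature, user_centroids):
    """Nearest-centroid stand-in for the deep classifier: the claimed
    identity is the user whose enrolled feature centroid lies closest
    (Euclidean) to the observed feature."""
    return min(user_centroids,
               key=lambda u: math.dist(feature, user_centroids[u]))
```

The deep model in the paper learns its features end-to-end from raw CSI rather than relying on summary statistics, but the enrollment-then-match structure is the same.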