

Title: Human Male Body Images from Multiple Perspectives with Multiple Lighting Settings
There are multiple technological ways to identify humans and verify claimed identities. The dataset presented herein facilitates work on hard and soft biometric human identification and identity verification. It comprises full-body images of multiple fully clothed males from a constrained age range. The images have been taken from multiple perspectives with varied lighting brightness and temperature.
Award ID(s):
1757659
NSF-PAR ID:
10093803
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Data
Volume:
4
Issue:
1
ISSN:
2306-5729
Page Range / eLocation ID:
3
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Agaian, Sos S. ; DelMarco, Stephen P. ; Asari, Vijayan K. (Ed.)
    Iris recognition is a widely used biometric technology with high accuracy and reliability in well-controlled environments. However, recognition accuracy can degrade significantly in non-ideal scenarios, such as off-angle iris images. To address these challenges, deep learning frameworks have been proposed to identify subjects from their off-angle iris images. Traditional CNN-based iris recognition systems train a single deep network on multiple off-angle iris images of the same subject to extract gaze-invariant features, then test incoming off-angle images with this single network to assign them to the correct subject class. An alternative multi-model approach trains multiple shallow networks, one per gaze angle, each serving as an expert for that specific angle; when testing an off-angle iris image, we first estimate its gaze angle and feed the probe image to the corresponding expert network for recognition. In this paper, we present an analysis of the performance of both single-model and multi-model deep learning frameworks for identifying subjects from their off-angle iris images. Specifically, we compare the performance of a single AlexNet with multiple SqueezeNet models; SqueezeNet is a variant of AlexNet that uses 50x fewer parameters and is optimized for devices with limited computational resources. Our experiments are conducted on an off-angle iris dataset consisting of 100 subjects captured at 10-degree intervals from -50 to +50 degrees. The results indicate that test angles farther from the trained angles yield lower accuracy than angles closer to the trained gaze angles. Our findings suggest that using SqueezeNet, which requires fewer parameters than AlexNet, can enable iris recognition on devices with limited computational resources while maintaining accuracy.
Overall, the results of this study can contribute to the development of more robust iris recognition systems that can perform well in non-ideal scenarios. 
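The gaze-angle routing behind the multi-model approach described above can be sketched as follows. This is a generic illustration, not the paper's implementation; all names are hypothetical, and the per-angle classifiers stand in for the trained SqueezeNet experts.

```python
# Sketch of multi-model routing: one "expert" classifier per trained
# gaze angle, with each probe dispatched to the nearest expert.

def nearest_trained_angle(estimated_angle, trained_angles):
    """Pick the trained gaze angle closest to the probe's estimated angle."""
    return min(trained_angles, key=lambda a: abs(a - estimated_angle))

def route_probe(estimated_angle, experts):
    """experts maps a trained gaze angle -> its classifier callable."""
    angle = nearest_trained_angle(estimated_angle, experts.keys())
    return experts[angle]

# The dataset in the paper covers -50 to +50 degrees in 10-degree steps.
trained = list(range(-50, 51, 10))
```

A probe estimated at, say, -23 degrees would be handled by the -20 degree expert, which matches the observation above that accuracy drops as probes move away from the trained angles.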
  2. Facial attribute recognition is conventionally computed from a single image. In practice, each subject may have multiple face images. Taking eye size as an example: it should not change, yet it may be estimated differently across images, which can negatively impact face recognition. Thus, computing these attributes per subject rather than per single image is an important problem. To address it, we deploy deep training for facial attribute prediction and explore the inconsistency among the attributes computed from each single image. We then develop two approaches to address this inconsistency. Experimental results show that the proposed methods can handle facial attribute estimation on either multiple still images or video frames, and can correct incorrectly annotated labels. The experiments are conducted on two large public databases with annotations of facial attributes.
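The subject-level estimation the abstract argues for (one attribute value per subject rather than per image) can be illustrated with a minimal fusion sketch; this is a generic aggregation strategy, not the paper's two methods, and all names are invented.

```python
from collections import Counter
from statistics import median

def aggregate_attribute(per_image_values, binary=False):
    """Fuse per-image attribute predictions into one subject-level value.

    Majority vote for binary attributes (e.g., "wears glasses"),
    median for continuous ones (e.g., estimated eye size), which
    resists outlier estimates from individual images.
    """
    if binary:
        return Counter(per_image_values).most_common(1)[0][0]
    return median(per_image_values)
```

For instance, three per-image eye-size estimates of 0.3, 0.5, and 0.9 would be fused to the median 0.5, removing the per-image inconsistency the abstract highlights.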
  3. We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is a generative model that learns an intermediate distribution, borrowing a subset of properties from each domain and enabling the generation of images that did not exist in any domain exclusively. This challenging problem requires an accurate disentanglement of object shape, appearance, and background from each domain, so that the appearance and shape factors from the two domains can be interchanged. We augment an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method leads to accurate and consistent appearance and shape transfer across domains.
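A histogram can be made differentiable by replacing hard bin assignment with Gaussian soft-binning, so gradients flow back to the feature values. The sketch below is one common way to realize the "differentiable histogram" named above; it is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def soft_histogram(features, centers, bandwidth=0.1):
    """Soft-bin scalar features with Gaussian kernels.

    Unlike a hard histogram, every feature contributes a smooth weight
    to every bin, so the result is differentiable w.r.t. the features.
    """
    diffs = features[:, None] - centers[None, :]       # shape (N, bins)
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)  # kernel weights
    hist = weights.sum(axis=0)                         # per-bin mass
    return hist / hist.sum()                           # normalise to 1
```

Two images whose features produce similar soft histograms can then be pulled together by an L1 or L2 penalty during generator training, matching the optimization objective described above.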
  4. ABSTRACT

    We report on the low Galactic latitude (b = 4.3°) quasar 2005+403, the second active galactic nucleus in which we detected a rare phenomenon of multiple imaging induced by refractive-dominated scattering. The manifestation of this propagation effect is revealed at different frequencies (≲ 8 GHz) and epochs of Very Long Baseline Array (VLBA) observations. The pattern formed by anisotropic scattering is stretched out along the line of constant Galactic latitude with a local position angle PA ≈ 40°, showing 1–2 sub-images, often on either side of the core. Analysing the multifrequency VLBA data ranging from 1.4 to 43.2 GHz, we found that both the angular size of the apparent core component and the separation between the primary and secondary core images follow a wavelength-squared dependence, providing convincing evidence for a plasma-scattering origin of the multiple imaging. Based on the Owens Valley Radio Observatory long-term monitoring data at 15 GHz obtained for 2005+403, we identified characteristic flux density excursions that occurred in 2019 April and May and attributed them to an extreme scattering event (ESE) associated with the passage of a plasma lens across the line of sight. Modelling the ESE, we determined that the angular size of the screen is 0.4 mas and that it drifts with a proper motion of 4.4 mas yr−1. Assuming that the scattering screen is located in the highly turbulent Cygnus region, the transverse linear size and speed of the lens with respect to the observer are 0.7 au and 37 km s−1, respectively.

     
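The final step of the abstract (converting the lens's angular size and proper motion to a linear size and speed) follows from the small-angle relation that 1 arcsec at 1 pc subtends 1 au, and that 1 au/yr ≈ 4.74 km/s. The distance used below (~1.75 kpc, consistent with the quoted Cygnus-region figures) is an assumption for illustration and is not stated in the abstract.

```python
AU_PER_YR_IN_KM_S = 4.74  # 1 au/yr expressed in km/s

def lens_linear_size_au(angular_size_mas, distance_pc):
    """Small-angle conversion: 1 arcsec at 1 pc subtends 1 au."""
    return angular_size_mas * 1e-3 * distance_pc

def lens_speed_km_s(proper_motion_mas_yr, distance_pc):
    """Transverse speed implied by a proper motion at a given distance."""
    return proper_motion_mas_yr * 1e-3 * distance_pc * AU_PER_YR_IN_KM_S

d = 1750  # pc; assumed Cygnus-region screen distance (illustrative)
size = lens_linear_size_au(0.4, d)   # lens size from the 0.4 mas screen
speed = lens_speed_km_s(4.4, d)      # speed from the 4.4 mas/yr drift
```

With this assumed distance the conversion reproduces the ~0.7 au size and ~37 km/s speed quoted in the abstract.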
  5. Abstract Identifying prostate cancer patients harboring aggressive forms of prostate cancer remains a significant clinical challenge. Here we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease, thus providing a new tool to help address this important challenge. We find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that provides unique structural insight (i.e., molecular maps or "optical stains") of thin tissue sections with subcellular (nanoscale) resolution. We show that this phenotypical continuum can also be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. In addition to providing several novel "optical stains" with contrast for disease, we also adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images, thus providing multiple stains (including the gold-standard H&E) from the same unlabeled specimen. Agreement between the virtual H&E images and the H&E-stained tissue sections was evaluated by a panel of pathologists, who found the two modalities to be in excellent agreement. This work has significant implications toward improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. The same approach can also be applied broadly to other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
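The core constraint behind the cycle-consistent translation mentioned above is a reconstruction penalty: translating UV to virtual H&E and back should return the input. The sketch below is a minimal illustration of that loss term; `G` and `F` are hypothetical stand-ins for the two generators, not the paper's networks.

```python
import numpy as np

def cycle_consistency_l1(x, G, F):
    """L1 cycle-consistency penalty ||F(G(x)) - x||_1.

    G: forward translator (e.g., label-free UV -> virtual H&E),
    F: backward translator (virtual H&E -> UV). A CycleGAN-style
    model minimises this term so translation is content-preserving.
    """
    return np.abs(F(G(x)) - x).mean()
```

When `F` exactly inverts `G`, the penalty is zero; any information the forward translation discards shows up as reconstruction error, which is what keeps the virtual stain faithful to the unlabeled specimen.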