Limited data availability is a challenging problem in the latent fingerprint domain. Synthetically generated fingerprints are vital for training data-hungry neural-network-based algorithms. Conventional methods distort clean fingerprints to generate synthetic latent fingerprints. We propose a simple and effective approach that uses style transfer and image blending to synthesize realistic latent fingerprints. Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information of the input contact-based fingerprints while exhibiting characteristics similar to real latent fingerprints. Additionally, we show that the generated fingerprints span multiple qualities and styles, suggesting that the proposed method can generate multiple samples from a single fingerprint.
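The blending stage of such a pipeline can be sketched as follows. This is an illustrative simplification only: the style-transfer step is omitted, and the function name, blend weights, and use of plain alpha blending are assumptions, not the paper's actual implementation.

```python
import numpy as np

def blend_latent(fingerprint, background, alpha=0.6):
    """Alpha-blend a (style-transferred) fingerprint onto a background
    texture to mimic a latent print; arrays are floats in [0, 1]."""
    assert fingerprint.shape == background.shape
    latent = alpha * fingerprint + (1.0 - alpha) * background
    return np.clip(latent, 0.0, 1.0)

# Varying the blend weight yields multiple latent-style samples
# from a single input fingerprint.
rng = np.random.default_rng(0)
fp = rng.random((64, 64))
bg = rng.random((64, 64))
samples = [blend_latent(fp, bg, a) for a in (0.4, 0.6, 0.8)]
```

Sweeping the blend weight is one way a single contact-based print could produce several synthetic latents of differing quality.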
High Fidelity Fingerprint Generation: Quality, Uniqueness, And Privacy
In this work, we utilize progressive growth-based Generative Adversarial Networks (GANs) to develop the Clarkson Fingerprint Generator (CFG). We demonstrate that the CFG is capable of generating realistic, high-fidelity, 512×512-pixel, full, plain-impression fingerprints. Our results suggest that the fingerprints generated by the CFG are unique, diverse, and resemble the training dataset in terms of minutiae configuration and quality, while not revealing the underlying identities of the training data. We make the pre-trained CFG model and the synthetically generated dataset publicly available at https://github.com/keivanB/Clarkson_Finger_Gen
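The core idea of progressive growing is a resolution schedule: the generator starts at a tiny output size and layers are added to double it until the target is reached. A minimal sketch of that schedule (the start resolution of 4×4 is the convention from the progressive-GAN literature, assumed here rather than stated in the abstract):

```python
def progressive_resolutions(start=4, target=512):
    """Resolution schedule for progressive GAN growing: training begins
    at start x start and layers are added to double the output size
    until target x target is reached."""
    res = start
    while res <= target:
        yield res
        res *= 2

print(list(progressive_resolutions()))  # [4, 8, 16, 32, 64, 128, 256, 512]
```

Training at each resolution before growing stabilizes learning and is what makes full 512×512 fingerprint synthesis tractable.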
- Award ID(s): 1650503
- PAR ID: 10318826
- Date Published:
- Journal Name: 2021 IEEE International Conference on Image Processing (ICIP)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Faggioli, G; Ferro, N; Galuščáková, P; Herrera, A (Eds.) In the ever-changing realm of medical image processing, ImageCLEF brought a new dimension with the Identifying GAN Fingerprint task, catering to the advancement of visual media analysis. This year, the organizers presented the task of detecting training-image fingerprints to control the quality of synthetic images for the second time (task 1) and introduced the task of detecting generative-model fingerprints for the first time (task 2). Both tasks aim to discern these fingerprints from images, covering both real training images and the generative models. The dataset comprised 3D CT images of lung tuberculosis patients; the development set featured a mix of real and generated images, alongside a separate test set. Our team 'CSMorgan' contributed several approaches, leveraging multiformer networks (combining features extracted using BLIP2 and DINOv2), additive and mode thresholding techniques, and late-fusion methodologies, bolstered by morphological operations. In Task 1, our best performance was attained with a late-fusion-based reranking strategy, achieving an F1 score of 0.51, while the additive average thresholding approach closely followed with 0.504. In Task 2, our multiformer model garnered an Adjusted Rand Index (ARI) score of 0.90, and a fine-tuned variant of the multiformer yielded 0.8137. These outcomes underscore the efficacy of the multiformer-based approach in accurately discerning both real-image and generative-model fingerprints.
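The late-fusion idea above can be sketched in a few lines: per-image scores from two feature extractors are averaged, then thresholded into a decision. The equal weighting and the 0.5 threshold are illustrative assumptions, not the values used by the team.

```python
import numpy as np

def late_fusion(scores_a, scores_b, weight=0.5, threshold=0.5):
    """Fuse per-image scores from two feature extractors (e.g. one
    BLIP2-based and one DINOv2-based) by weighted averaging, then
    apply a decision threshold."""
    fused = (weight * np.asarray(scores_a, dtype=float)
             + (1.0 - weight) * np.asarray(scores_b, dtype=float))
    return fused >= threshold

# Three images scored by two extractors; only the first is accepted.
decisions = late_fusion([0.9, 0.2, 0.6], [0.7, 0.4, 0.3])
```

Fusing at the score level (rather than concatenating features) lets each extractor be tuned independently before combination.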
Large Language Models (LLMs) have demonstrated significant potential across various applications, but their use as AI copilots in complex and specialized tasks is often hindered by AI hallucinations, where models generate outputs that seem plausible but are incorrect. To address this challenge, we develop AutoFEA, an intelligent system that integrates LLMs with Finite Element Analysis (FEA) to automate the generation of FEA input files. Our approach features a novel planning method and a graph convolutional network (GCN)-Transformer link-prediction retrieval model, which enhances the accuracy and reliability of the generated simulations. The AutoFEA system proceeds through key steps: dataset preparation, step-by-step planning, GCN-Transformer link-prediction retrieval, LLM-driven code generation, and simulation using CalculiX. In this workflow, the GCN-Transformer model predicts and retrieves relevant example codes based on relationships between different steps in the FEA process, guiding the LLM in generating accurate simulation codes. We validate AutoFEA using a specialized dataset of 512 meticulously prepared FEA projects, which provides a robust foundation for training and evaluation. Our results demonstrate that AutoFEA significantly reduces AI hallucinations by grounding LLM outputs in physically accurate simulation data, thereby improving the success rate and accuracy of FEA simulations and paving the way for future advancements in AI-assisted engineering tasks.
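The retrieval step in such a workflow can be sketched as a simple embedding-similarity ranking: given an embedding of the current planning step, return the most relevant stored example codes to ground the LLM's generation. This cosine-similarity stand-in is an assumption for illustration; the paper's actual model is a GCN-Transformer link predictor, not a plain similarity search.

```python
import numpy as np

def retrieve_examples(step_embedding, example_embeddings, k=2):
    """Rank stored FEA example codes by cosine similarity to the
    current planning-step embedding and return the top-k indices."""
    q = step_embedding / np.linalg.norm(step_embedding)
    E = example_embeddings / np.linalg.norm(
        example_embeddings, axis=1, keepdims=True)
    return np.argsort(-(E @ q))[:k]

# Toy corpus of three example-code embeddings; the query is closest
# to examples 0 and 2.
corpus = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
top = retrieve_examples(np.array([1.0, 0.0]), corpus)
```

The retrieved examples would then be placed in the LLM prompt so the generated input file follows known-good structure.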
Computing professionals in areas like compilers, performance analysis, and security often analyze and manipulate control flow graphs (CFGs) in their work. CFGs are directed networks that describe possible orderings of instructions in the execution of a program. Visualizing a CFG is a common activity in developing or debugging computational approaches that use them. However, general graph drawing layouts, including the hierarchical ones frequently applied to CFGs, do not capture CFG-specific structures or tasks, and thus the resulting drawing may not match the needs of their audience, especially for more complicated programs. While several algorithms offer flexibility in specifying the layout, they often require expertise with graph drawing layouts and primitives that these potential users do not have. To bring domain-specific CFG drawing to this audience, we develop CFGConf, a library designed to match the abstraction level of CFG experts. CFGConf provides a JSON interface that produces drawings that can stand alone or be integrated into multi-view visualization systems. We developed CFGConf through an interactive design process with experts while incorporating lessons learned from previous CFG visualization systems, a survey of CFG drawing conventions in computing systems conferences, and existing design principles for notations. We evaluate CFGConf in terms of expressiveness, usability, and notational efficiency through a user study and illustrative examples. CFG experts were able to use the library to produce the domain-aware layouts and appreciated the task-aware nature of the specification.
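A JSON-based CFG specification of the kind described might look like the following. The field names (`nodes`, `edges`, `kind`) are hypothetical placeholders for illustration, not CFGConf's actual schema: nodes stand for basic blocks and edges for possible control-flow transitions, including a loop back-edge.

```python
import json

# Hypothetical CFG specification for a three-block program with a loop.
spec = {
    "nodes": [
        {"id": "entry", "label": "entry"},
        {"id": "loop", "label": "loop body"},
        {"id": "exit", "label": "exit"},
    ],
    "edges": [
        {"source": "entry", "target": "loop"},
        {"source": "loop", "target": "loop", "kind": "back-edge"},
        {"source": "loop", "target": "exit"},
    ],
}
print(json.dumps(spec, indent=2))
```

Declaring structure at this level, rather than in layout primitives, is what lets a compiler engineer describe the graph without graph-drawing expertise.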
Dataset Description
This dataset consists of processed line-of-sight (LoS) magnetogram images of Active Regions (ARs) from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The images are derived from the Space-Weather HMI Active Region Patches (SHARP) definitive data series and cover the period from May 2010 to 2018, sampled hourly.
Dataset Contents:
- Processed Magnetogram Images: Each image is a cropped, standardized view of an AR patch, extracted and adjusted from the original magnetograms, then filtered and normalized to 512×512 pixels.
Processing Steps:
- Cropping: Magnetograms are cropped using bitmaps that define the region of interest within the AR patches. Regions smaller than 70 pixels in width are excluded.
- Flux Adjustment: Magnetic flux values are capped at ±256 G, and values within ±25 G are set to 0 to minimize noise.
- Standardization: Patches are resized to 512×512 pixels, using zero-padding for smaller patches or a 512×512 kernel that selects the region with the maximum total unsigned flux (USFLUX) for larger patches.
- Normalization: Final images are scaled to fit within the range 0-255.
Data Dictionary:
- harp_N1_N2: These tar files contain folders holding the AR patches with HARP numbers N1 through N2.
- complete_hourly_dataset.csv: Lists the hourly sampled magnetograms along with their associated GOES flare class, assuming a 24-hour forecast horizon.
- augmentations: Five different augmentations of AR patches corresponding to GOES flare classes greater than C, assuming a 24-hour forecast horizon, are provided as five tar files. Look for: horizontal flip, vertical_flip, add noise, polarity change, and gaussian blur.
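The flux-adjustment and normalization steps can be sketched directly from the description. A minimal sketch, assuming a linear mapping from the capped range [-256, 256] G onto 0-255 (the exact scaling used by the dataset authors is not specified in the description):

```python
import numpy as np

def adjust_flux(mag):
    """Flux adjustment and normalization as described above: cap values
    at +/-256 G, zero values within +/-25 G to suppress noise, then
    scale linearly to the 0-255 range."""
    out = np.clip(np.asarray(mag, dtype=float), -256.0, 256.0)
    out[np.abs(out) <= 25.0] = 0.0
    return (out + 256.0) / 512.0 * 255.0

# Toy 1-D patch: extreme values are capped, weak flux is zeroed.
patch = np.array([-300.0, -10.0, 0.0, 100.0, 300.0])
scaled = adjust_flux(patch)
```

After this step, zero flux maps to the midpoint of the output range, so positive and negative polarities remain distinguishable in the final image.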

