

Title: Bayesian Graphical Entity Resolution using Exchangeable Random Partition Priors
Abstract

Entity resolution (record linkage or deduplication) is the process of identifying and linking duplicate records in databases. In this paper, we propose a Bayesian graphical approach for entity resolution that links records to latent entities, where the prior representation on the linkage structure is exchangeable. First, we adopt a flexible and tractable set of priors for the linkage structure, which corresponds to a special class of random partition models. Second, we propose a more realistic distortion model for categorical/discrete record attributes, which corrects a logical inconsistency with the standard hit-miss model. Third, we incorporate hyperpriors to improve flexibility. Fourth, we employ a partially collapsed Gibbs sampler for inferential speedups. Using a selection of private and nonprivate data sets, we investigate the impact of our modeling contributions and compare our model with two alternative Bayesian models. In addition, we conduct a simulation study for household survey data, where we vary distortion, duplication rates and data set size. We find that our model performs more consistently than the alternatives across a variety of scenarios and typically achieves the highest entity resolution accuracy (F1 score). Open source software is available for our proposed methodology, and we provide a discussion regarding our work and future directions.
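The abstract reports entity resolution accuracy as an F1 score. As a rough illustration of that pairwise evaluation (this is not the paper's released software; the function names and toy labels below are invented for illustration), a minimal Python sketch:

```python
# Minimal sketch (not the paper's software): pairwise precision/recall/F1
# for entity resolution, computed from predicted and true entity labels.
from itertools import combinations

def matching_pairs(labels):
    """Set of record pairs assigned to the same entity."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

def pairwise_f1(pred_labels, true_labels):
    pred, true = matching_pairs(pred_labels), matching_pairs(true_labels)
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(true) if true else 1.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: five records; the prediction misses one duplicate pair.
print(pairwise_f1([0, 0, 1, 2, 3], [0, 0, 1, 1, 2]))  # ~0.667
```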

 
NSF-PAR ID: 10390204
Author(s) / Creator(s): ; ;
Publisher / Repository: Oxford University Press
Date Published:
Journal Name: Journal of Survey Statistics and Methodology
Volume: 11
Issue: 3
ISSN: 2325-0984
Page Range / eLocation ID: p. 569-596
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Martin, Andreas; Hinkelmann, Knut; Fill, Hans-Georg; Gerber, Aurona; Lenat, Doug; Stolle, Reinhard; Harmelen, Frank van (Ed.)
    Record linkage, often called entity resolution or de-duplication, refers to identifying the same entities across one or more databases. As the amount of data generated grows at an exponential rate, it becomes increasingly important to be able to integrate data from several sources to perform richer analysis. In this paper, we present an open-source, comprehensive, end-to-end hybrid record linkage framework that combines automatic and manual review processes. Using this framework, we train several models based on different machine learning algorithms, such as random forests, linear SVMs, radial SVMs, and dense neural networks, and compare the effectiveness and efficiency of these models for record linkage in different settings. We evaluate model performance based on recall, F1-score (quality of linkages), and the number of uncertain pairs, i.e., the number of pairs that need manual review. We also test our trained models on a new dataset to see how well the different trained models transfer to a new setting. The RF, linear SVM, and radial SVM models transfer much better than the DNN. Finally, we study the effect of the name2vec (n2v) feature, a letter embedding of names, on model performance. Using n2v results in a smaller manual review set with a slightly lower F1-score. Overall, the SVM models performed best in all experiments.
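    As a rough companion to the pipeline described above (not the paper's framework; the features, labels, and review thresholds below are invented for illustration), a minimal scikit-learn sketch of training a random forest on pairwise comparison vectors and flagging uncertain pairs for manual review:

```python
# Minimal sketch: assumes pairwise comparison features X and match labels y;
# a probability band marks "uncertain pairs" that would go to manual review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))              # toy comparison vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy match/non-match labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
low, high = 0.3, 0.7                        # hypothetical review band
uncertain = (proba > low) & (proba < high)  # pairs sent to manual review
pred = (proba >= 0.5).astype(int)

print("recall:", recall_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("uncertain pairs:", int(uncertain.sum()))
```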
  2. Abstract

    Study of the late Quaternary geomagnetic field contributes significantly to understanding the origin of millennial‐scale paleomagnetic secular variations, the structure of geomagnetic excursions, and the long‐term shielding by the geomagnetic field. A compilation of paleomagnetic sediment records and archeomagnetic and lava flow data covering the past 100 ka enables reconstruction of the global geomagnetic field on such long‐term scales. We use regularized inversion to build the first global, time‐dependent, geomagnetic field model spanning the past 100 ka, named GGF100k (Global Geomagnetic Field over the past 100 ka). The model is parametrized spatially in spherical harmonics and temporally with cubic splines. It is heavily constrained by more than 100 continuous sediment records covering extended periods of time, which strongly prevail over the limited number of discrete snapshots provided by archeomagnetic and volcanic data. Following an assessment of the temporal resolution of each sediment's magnetic record, we have introduced smoothing kernels into the forward modeling when assessing data misfit. This accommodates the smoothing inherent in remanence acquisition in individual sediment paleomagnetic records, facilitating a closer fit to both high‐ and low‐resolution records in regions where some sediments have variable temporal resolutions. The model has similar spatial resolution but less temporal complexity than current Holocene geomagnetic field models. Using the new reconstruction, we discuss dipole moment variations, the time‐averaged field, and paleomagnetic secular variation activity. The new GGF100k model fills the gap in the geomagnetic power spectrum in the frequency range 100–1,000 Ma⁻¹.
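    The reconstruction above is built by regularized inversion. A generic, illustrative sketch of the damped (Tikhonov) least-squares step that such an inversion relies on (not the GGF100k modeling code; the forward operator, data, and damping value are synthetic placeholders):

```python
# Generic sketch of regularized (damped) least squares: trade data misfit
# against model norm via a damping parameter. All quantities are synthetic.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(200, 50))                 # forward operator (data kernel)
m_true = rng.normal(size=50)
d = G @ m_true + 0.1 * rng.normal(size=200)    # noisy synthetic data

def damped_least_squares(G, d, damping):
    """Solve min ||G m - d||^2 + damping * ||m||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ d)

m_est = damped_least_squares(G, d, damping=1.0)
print("data misfit:", np.linalg.norm(G @ m_est - d))
print("model norm :", np.linalg.norm(m_est))
```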

     
  3. Multi-source entity linkage focuses on integrating knowledge from multiple sources by linking the records that represent the same real-world entity. This is critical in high-impact applications such as data cleaning and user stitching. State-of-the-art entity linkage pipelines mainly depend on supervised learning, which requires abundant training data. However, collecting well-labeled training data becomes expensive when the data from many sources arrives incrementally over time. Moreover, the trained models can easily overfit to specific data sources and thus fail to generalize to new sources, owing to significant differences in data and label distributions. To address these challenges, we present AdaMEL, a deep transfer learning framework that learns generic high-level knowledge to perform multi-source entity linkage. AdaMEL models the attribute importance used to match entities through an attribute-level self-attention mechanism, and leverages massive unlabeled data from new data sources through domain adaptation to make the model generic and data-source agnostic. In addition, AdaMEL can incorporate an additional set of labeled data to more accurately integrate data sources with different attribute importance. Extensive experiments show that our framework achieves state-of-the-art results, with an 8.21% improvement on average over methods based on supervised learning. It is also more stable in handling different sets of data sources and requires less runtime.
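    As a loose illustration of the attribute-level attention idea described above (this is not AdaMEL's implementation; the embeddings, query vector, and attribute count are made up), a small NumPy sketch that weights per-attribute similarities by attention-derived importance weights:

```python
# Rough sketch (not AdaMEL's code): softmax attention over attribute
# embeddings to weight each attribute's contribution to a record-pair score.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attribute_attention_score(emb_a, emb_b, w_query):
    """emb_a, emb_b: (n_attributes, dim) embeddings of two records' attribute
    values; w_query: (dim,) query vector (random here, learned in practice)."""
    sims = np.einsum("ad,ad->a", emb_a, emb_b)    # per-attribute similarity
    weights = softmax((emb_a + emb_b) @ w_query)  # attribute importance
    return float(weights @ sims), weights

rng = np.random.default_rng(0)
dim, n_attr = 16, 4                               # e.g. name, email, phone, address
emb_a = rng.normal(size=(n_attr, dim))
emb_b = rng.normal(size=(n_attr, dim))
score, weights = attribute_attention_score(emb_a, emb_b, rng.normal(size=dim))
print("pair score:", score, "attribute weights:", weights.round(3))
```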
  4. Merging datafiles containing information on overlapping sets of entities is a challenging task in the absence of unique identifiers, and is further complicated when some entities are duplicated in the datafiles. Most approaches to this problem have focused on linking two files assumed to be free of duplicates, or on detecting which records in a single file are duplicates. However, it is common in practice to encounter scenarios that fit somewhere in between or beyond these two settings. We propose a Bayesian approach for the general setting of multifile record linkage and duplicate detection. We use a novel partition representation to propose a structured prior for partitions that can incorporate prior information about the data collection processes of the datafiles in a flexible manner, and extend previous models for comparison data to accommodate the multifile setting. We also introduce a family of loss functions to derive Bayes estimates of partitions that allow uncertain portions of the partitions to be left unresolved. The performance of our proposed methodology is explored through extensive simulations. Supplementary materials for this article are available online. 
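    To make the partition representation concrete (a minimal sketch of the general idea only, not the paper's structured prior or loss functions; the files and labels are illustrative), records from several datafiles can be assigned arbitrary entity labels, with shared labels indicating coreference:

```python
# Minimal sketch of a partition representation for multifile record linkage:
# each record (file, index) gets an entity label; records sharing a label are
# declared coreferent. Files and labels below are illustrative only.
from collections import defaultdict

records = [("file1", 0), ("file1", 1), ("file2", 0), ("file2", 1), ("file3", 0)]
labels  = [1, 2, 1, 3, 1]    # the partition: one entity label per record

clusters = defaultdict(list)
for rec, lab in zip(records, labels):
    clusters[lab].append(rec)

for entity, members in sorted(clusters.items()):
    print(f"entity {entity}: {members}")
# entity 1 has duplicates within and across files; entities 2 and 3 are singletons.
```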
  5. SUMMARY

    The spectral element method is currently the method of choice for computing accurate synthetic seismic wavefields in realistic 3-D earth models at the global scale. However, it requires significantly more computational time than normal mode-based approximate methods. Source stacking, whereby multiple earthquake sources are aligned on their origin time and simultaneously triggered, can reduce the computational costs by several orders of magnitude. We present the results of synthetic tests performed on a realistic radially anisotropic 3-D model, slightly modified from model SEMUCB-WM1, with three-component synthetic waveform ‘data’ of 10 000 s duration, filtered at periods longer than 60 s, for a set of 273 events and 515 stations. We consider two definitions of the misfit function, one based on the stacked records at individual stations and another based on station-pair cross-correlations of the stacked records. The inverse step is performed using a Gauss–Newton approach in which the gradient and Hessian are computed using normal mode perturbation theory. We investigate the retrieval of radially anisotropic long-wavelength structure in the upper mantle in the depth range 100–800 km, after fixing the crust and uppermost mantle structure constrained by fundamental mode Love and Rayleigh wave dispersion data. The results show good performance with both definitions of the misfit function, even in the presence of realistic noise, albeit with degraded amplitudes of lateral variations in the anisotropic parameter ξ. Interestingly, we show that we can retrieve the long-wavelength structure in the upper mantle when considering any one of three portions of the cross-correlation time series, corresponding to windows where we expect the energy from surface wave overtones, the fundamental mode, or a mixture of the two to be dominant, respectively. We also consider the issue of missing data by randomly removing a successively larger proportion of the available synthetic data. We replace the missing data by synthetics computed in the current 3-D model using normal mode perturbation theory. The inversion results degrade with the proportion of missing data, especially for ξ, and we find that a data availability of 45 per cent or more leads to acceptable results. We also present a strategy for grouping events and stations to minimize the number of missing data in each group. This leads to an increased number of computations but can be significantly more efficient than conventional single-event-at-a-time inversion. We apply the grouping strategy to a real picking scenario and show promising resolution capability despite the use of fewer waveforms and an uneven ray path distribution. The source stacking approach can be used to rapidly obtain a starting 3-D model for more conventional full-waveform inversion at higher resolution, and to investigate assumptions made in the inversion, such as trade-offs between isotropic, anisotropic, or anelastic structure, different model parametrizations, or how crustal structure is accounted for.
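    As a toy illustration of source stacking and the two misfit definitions mentioned above (not the actual spectral element or normal mode code; all arrays are random placeholders and the station count is reduced):

```python
# Toy sketch of source stacking: align events on a common origin time, sum
# their waveforms per station, then compare stacked 'observed' and 'synthetic'
# records directly or via station-pair cross-correlations. Data are fake.
import numpy as np

rng = np.random.default_rng(0)
n_events, n_stations, n_samples = 273, 5, 1000   # stations reduced for the toy
obs = rng.normal(size=(n_events, n_stations, n_samples))
syn = obs + 0.05 * rng.normal(size=obs.shape)    # imperfect model prediction

obs_stack = obs.sum(axis=0)                      # stacked record per station
syn_stack = syn.sum(axis=0)

# Misfit 1: L2 difference of stacked records at individual stations.
misfit_records = np.sum((obs_stack - syn_stack) ** 2)

# Misfit 2: L2 difference of station-pair cross-correlations of the stacks.
def pair_crosscorr(stack):
    return np.array([np.correlate(stack[i], stack[j], mode="full")
                     for i in range(len(stack)) for j in range(i + 1, len(stack))])

misfit_xcorr = np.sum((pair_crosscorr(obs_stack) - pair_crosscorr(syn_stack)) ** 2)
print(misfit_records, misfit_xcorr)
```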

     