Title: Deterministic bibliometric disambiguation challenges in company names
Peer-reviewed publications and patents serve as important signatures of knowledge generation, and therefore the authors and their organizations can represent agents of intellectual transformation. Accurate tracking of these players enables scholars to follow knowledge evolution. However, while author name disambiguation has been discussed extensively, less is known about the impact of organization name on bibliometric studies. We expand here on the recently defined phenomenon of "onomastic profusion," high-frequency words used in organization names for semantic reasons, and thus contributing a non-random source of error to bibliographic studies. We use the Small Business Innovation Research (SBIR) Phase I awardees of the National Aeronautics and Space Administration (NASA) as a use case in the field of engineering innovation. We find that firms in California or Massachusetts experience a six percent decrease in the likelihood of using the word "Technologies" in their names. Furthermore, use of the words "Research" and "Science" is linked to doubling the number of awards. We illustrate that, in aggregate, firms executing rational strategic naming decisions can create deterministic bibliometric challenges.
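The aggregate effect described in the abstract can be sketched in miniature. The firm names and the one-token matcher below are invented for illustration, not taken from the paper; they show how a deterministic matcher that keys on shared name tokens conflates unrelated firms once generic words like "Technologies" and "Research" proliferate:

```python
from collections import Counter

# Invented firm names illustrating "onomastic profusion": semantically
# attractive tokens ("Technologies", "Research") recur across distinct firms.
firms = [
    "Orion Technologies",
    "Vector Technologies",
    "Nimbus Research",
    "Apex Research Sciences",
]

def token_counts(names):
    """Frequency of each token across all firm names."""
    return Counter(tok for name in names for tok in name.upper().split())

def naive_match(a, b):
    """Deterministic matcher: declare a match if any name token is shared."""
    return bool(set(a.upper().split()) & set(b.upper().split()))
```

Here `token_counts(firms)` counts "TECHNOLOGIES" and "RESEARCH" twice each, and `naive_match` wrongly merges "Orion Technologies" with "Vector Technologies" on the strength of a shared generic word. This is the non-random error source the abstract names: collisions cluster on exactly the words firms rationally choose.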
Meng, Yu; Zhang, Yunyi; Huang, Jiaxin; Xiong, Chenyan; Ji, Heng; Zhang, Chao; Han, Jiawei
(, EMNLP'20: 2020 Conf. on Empirical Methods in Natural Language Processing, Nov. 2020)
Current text classification methods typically require a good number of human-labeled documents as training data, which can be costly and difficult to obtain in real applications. Humans can perform classification without seeing any labeled examples but only based on a small set of words describing the categories to be classified. In this paper, we explore the potential of only using the label name of each class to train classification models on unlabeled data, without using any labeled documents. We use pre-trained neural language models both as general linguistic knowledge sources for category understanding and as representation learning models for document classification. Our method (1) associates semantically related words with the label names, (2) finds category-indicative words and trains the model to predict their implied categories, and (3) generalizes the model via self-training. We show that our model achieves around 90% accuracy on four benchmark datasets including topic and sentiment classification without using any labeled documents but learning from unlabeled data supervised by at most 3 words (1 in most cases) per class as the label name.
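A toy sketch of steps (1)-(2) of this approach, with the pretrained language model's knowledge replaced by hard-coded related-word sets (the labels, word lists, and sentences below are invented for illustration): each label name is expanded into category-indicative words, and an unlabeled document is assigned the category whose words it contains most.

```python
# Hard-coded stand-in for related words a language model would associate
# with each label name (step 1 of the method described above).
label_vocab = {
    "sports": {"sports", "game", "team", "coach", "season"},
    "politics": {"politics", "election", "senate", "policy", "vote"},
}

def classify(doc):
    """Assign the label whose related-word set overlaps the document most."""
    tokens = set(doc.lower().split())
    scores = {label: len(tokens & words) for label, words in label_vocab.items()}
    return max(scores, key=scores.get)
```

The actual method goes further: it trains a neural model on these weak signals and then self-trains on the unlabeled corpus, but the core supervision really is just the label names.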
Huang, Zhiqi; Rahimi, Razieh; Yu, Puxuan; Shang, Jingbo; Allan, James
(, Proceedings of The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 21))
Inferring the set name of semantically grouped entities is useful in many tasks related to natural language processing and information retrieval. Previous studies mainly draw names from knowledge bases to ensure high quality, but that limits the candidate scope. We propose an unsupervised framework, AutoName, that exploits large-scale text corpora to name a set of query entities. Specifically, it first extracts hypernym phrases as candidate names from query-related documents via probing a pre-trained language model. A hierarchical density-based clustering is then applied to form potential concepts for these candidate names. Finally, AutoName ranks candidates and picks the top one as the set name based on constituents of the phrase and the semantic similarity of their concepts. We also contribute a new benchmark dataset for this task, consisting of 130 entity sets with name labels. Experimental results show that AutoName generates coherent and meaningful set names and significantly outperforms all compared methods. Further analyses show that AutoName is able to offer explanations for extracted names using the sentences most relevant to the corresponding concept.
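The candidate-extraction idea can be illustrated with a crude stand-in for AutoName's first stage: instead of probing a pretrained language model, harvest hypernym phrases with a Hearst-style pattern ("<name> such as <entities>") and rank candidates by frequency. The pattern, sentences, and two-word truncation below are invented simplifications:

```python
import re
from collections import Counter

# Invented query-related sentences mentioning the query entities.
sentences = [
    "They compared programming languages such as Python and Rust.",
    "Modern programming languages such as Go emphasize tooling.",
    "He writes scripts in languages such as Python.",
]

# Hearst-style pattern: the noun phrase preceding "such as" is a
# candidate hypernym (and thus a candidate set name).
PATTERN = re.compile(r"([a-z ]+?)\s+such as\s+", re.IGNORECASE)

def candidate_names(texts):
    cands = Counter()
    for s in texts:
        for m in PATTERN.finditer(s):
            # keep only the last two words of the captured phrase
            words = m.group(1).strip().split()[-2:]
            cands[" ".join(words).lower()] += 1
    return cands

best = candidate_names(sentences).most_common(1)[0][0]
```

AutoName itself clusters candidates by semantics before ranking, which this frequency-only sketch omits; the sketch only shows why corpus-derived candidates escape the knowledge-base scope limit.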
Dmitriev, Dmitry
(, Biodiversity Information Science and Standards)
The 3i World Auchenorrhyncha database (http://dmitriev.speciesfile.org) is being migrated into TaxonWorks (http://taxonworks.org) and comprises nomenclatural data for all known Auchenorrhyncha taxa (leafhoppers, planthoppers, treehoppers, cicadas, spittle bugs). Of all those scientific names, 8,700 are unique genus-group names (which include valid genera and subgenera as well as their synonyms). According to the Rules of Zoological Nomenclature, a properly formed species-group name when combined with a genus-group name must agree with the latter in gender if the species-group name is or ends with a Latin or Latinized adjective or participle. This provides a double challenge for researchers describing new or citing existing taxa. For each species, the knowledge about the part of speech is essential information (nouns do not change their form when associated with different generic names). For the genus, the knowledge of the gender is essential information. Every time the species is transferred from one genus to another, its ending may need to be transformed to make a proper new scientific name (a binominal name). In modern day practice, it is important, when establishing a new name, to provide information about the etymology of this name and the ways it should be used in future publications: the grammatical gender for a genus, and the part of speech for a species. The older names often do not provide enough information about their etymology to make proper construction of scientific names. That is why in the literature, we can find numerous cases where a scientific name is not formed in conformity to the Rules of Nomenclature. An attempt was made to resolve the etymology of the generic names in Auchenorrhyncha to unify and clarify nomenclatural issues in this group of insects. In TaxonWorks, the rules of nomenclature are defined using the NOMEN ontology (https://github.com/SpeciesFileGroup/nomen).
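The agreement rule described above can be sketched for the simplest case. The function below is a hypothetical illustration covering only first/second-declension Latin adjectives (-us masculine, -a feminine, -um neuter); real nomenclature involves many more declension patterns, participles, and invariant nouns:

```python
# Endings of first/second-declension Latin adjectives by grammatical gender.
ADJ_ENDINGS = {"masculine": "us", "feminine": "a", "neuter": "um"}

def agree(epithet, genus_gender):
    """Recombine a -us/-a/-um adjectival species epithet with a genus
    of the given gender, as required when a species is transferred."""
    for ending in ("us", "a", "um"):
        if epithet.endswith(ending):
            stem = epithet[: len(epithet) - len(ending)]
            return stem + ADJ_ENDINGS[genus_gender]
    return epithet  # e.g., nouns in apposition keep their form
```

For example, transferring a species with the epithet "maculatus" from a masculine to a feminine genus would yield "maculata" under this rule, which is exactly why the database must record both the part of speech of each epithet and the gender of each genus.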
Kim, Jinseok; Kim, Jenna; Owen‐Smith, Jason
(, Journal of the Association for Information Science and Technology)
Abstract In several author name disambiguation studies, some ethnic name groups such as East Asian names are reported to be more difficult to disambiguate than others. This implies that disambiguation approaches might be improved if ethnic name groups are distinguished before disambiguation. We explore the potential of ethnic name partitioning by comparing performance of four machine learning algorithms trained and tested on the entire data or specifically on individual name groups. Results show that ethnicity‐based name partitioning can substantially improve disambiguation performance because the individual models are better suited for their respective name group. The improvements occur across all ethnic name groups with different magnitudes. Performance gains in predicting matched name pairs outweigh losses in predicting nonmatched pairs. Feature (e.g., coauthor name) similarities of name pairs vary across ethnic name groups. Such differences may enable the development of ethnicity‐specific feature weights to improve prediction for specific ethnic name categories. These findings are observed for three labeled datasets with a natural distribution of problem sizes as well as one in which all ethnic name groups are controlled for the same sizes of ambiguous names. This study is expected to motivate scholars to group author names based on ethnicity prior to disambiguation.
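The partitioning idea can be sketched with the simplest possible "model": a per-group decision threshold on a single pair-similarity feature, fit separately for each name group. All scores, labels, and group names below are invented for illustration and are not the study's data:

```python
# (name_group, coauthor_similarity, is_same_author) — invented labeled pairs.
pairs = [
    ("east_asian", 0.9, True), ("east_asian", 0.7, True),
    ("east_asian", 0.6, False), ("east_asian", 0.4, False),
    ("english",    0.5, True), ("english",    0.4, True),
    ("english",    0.2, False), ("english",    0.1, False),
]

def midpoint_threshold(pos, neg):
    """Halfway between mean matched and mean non-matched similarity."""
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def per_group_thresholds(data):
    """Fit one threshold per name group instead of a single global one."""
    groups = {}
    for group, score, same in data:
        groups.setdefault(group, ([], []))[0 if same else 1].append(score)
    return {g: midpoint_threshold(pos, neg) for g, (pos, neg) in groups.items()}

thresholds = per_group_thresholds(pairs)
```

On this toy data the per-group thresholds (0.65 and 0.30) separate every pair correctly, while the single global midpoint (0.475) would misjudge one pair in each group: exactly the effect of matched-pair similarities differing across name groups that the study reports.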
Carley, Stephen F.; Porter, Alan L.; Youtie, Jan L.
(, Journal of Data and Information Science)
Abstract Purpose The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed on large databases. Variations in name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors containing the same surname and first initial of their first name. We illustrate our approach using three case studies. Design/methodology/approach The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author’s last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe’ would assume the form: ‘Doe, J’). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives.
Author identifiers provide a good starting point—e.g., if ‘Doe, J’ and ‘Doe, John’ share the same author identifier, this would be sufficient for us to conclude these are one and the same individual. We find email addresses similarly adequate—e.g., if two author names which share the same surname and same first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data is not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John’ and ‘Doe, J’ have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation can employ two or more faculty members sharing the same surname and first initial. Similarly, it’s conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they’re too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then commonalities among fielded data other than author identifiers, and finally manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we find most efficient in this study.
The script we developed to implement this procedure is designed to implement our name disambiguation procedure in a way that significantly reduces manual effort on the user’s part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author’s ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author’s surname and first initial of his or her first name who share commonalities in the WOS field on which the user was prompted to consolidate author names. This typically results in significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to). Research limitations Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user’s part. Our procedure doesn’t lend itself to scholars who have had a legal family name change (after marriage, for example).
Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary. Practical implications The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist. Originality/value Once again, the procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both. Findings Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each of our case studies. We find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas in which the authors publish differ across the case studies. Our procedure is more effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as a more modest/manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user-friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured or easy to work with.
The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g. emails, coauthors, affiliations, etc.), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
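The consolidation loop described in this abstract can be sketched independently of VantagePoint. The records, field names, and seeding below are invented for illustration: start from one record known to belong to the target author (e.g., via ORCID or email), then repeatedly absorb candidate records that share a value in one of the match fields:

```python
# Invented WOS-style records for one ambiguous "Doe, J" result set.
records = [
    {"id": 1, "orcid": "0000-0001", "email": "jdoe@uni.edu", "journal": "J. Phys."},
    {"id": 2, "orcid": "0000-0001", "email": None,           "journal": "J. Chem."},
    {"id": 3, "orcid": None,        "email": "jdoe@uni.edu", "journal": "J. Bio."},
    {"id": 4, "orcid": None,        "email": None,           "journal": "J. Chem."},
    {"id": 5, "orcid": "0000-0099", "email": None,           "journal": "Nano Lett."},
]

MATCH_FIELDS = ("orcid", "email", "journal")  # rounds, strongest field first

def consolidate(seed_id, recs):
    """Grow the accepted set from a known true positive by shared field values."""
    accepted = {seed_id}
    changed = True
    while changed:          # keep sweeping until no new record is absorbed
        changed = False
        for field in MATCH_FIELDS:
            known = {r[field] for r in recs if r["id"] in accepted and r[field]}
            for r in recs:
                if r["id"] not in accepted and r[field] and r[field] in known:
                    accepted.add(r["id"])
                    changed = True
    return accepted
```

Seeded from record 1, the loop absorbs record 2 (shared ORCID), record 3 (shared email), and then record 4 (shared journal), leaving record 5 out. Note that record 4 joins on the weakest field alone, which is precisely why the procedure ends with manual inspection rather than trusting such matches outright.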
@article{osti_10480696,
title = {Deterministic bibliometric disambiguation challenges in company names},
url = {https://par.nsf.gov/biblio/10480696},
DOI = {10.1109/ICSC56153.2023.00047},
abstractNote = {Peer-reviewed publications and patents serve as important signatures of knowledge generation, and therefore the authors and their organizations can represent agents of intellectual transformation. Accurate tracking of these players enables scholars to follow knowledge evolution. However, while author name disambiguation has been discussed extensively, less is known about the impact of organization name on bibliometric studies. We expand here on the recently defined phenomenon of "onomastic profusion," high-frequency words used in organization names for semantic reasons, and thus contributing a non-random source of error to bibliographic studies. We use the Small Business Innovation Research (SBIR) Phase I awardees of the National Aeronautics and Space Administration (NASA) as a use case in the field of engineering innovation. We find that firms in California or Massachusetts experience a six percent decrease in the likelihood of using the word "Technologies" in their names. Furthermore, use of the words "Research" and "Science" is linked to doubling the number of awards. We illustrate that, in aggregate, firms executing rational strategic naming decisions can create deterministic bibliometric challenges.},
journal = {Proceedings},
publisher = {IEEE},
author = {Belz, A.},
}