Search Results
Search for: All records — Total resources: 2
Wooldridge, Michael; Dy, Jennifer; Natarajan, Sriraam (Eds.). Free, publicly accessible full text available February 20, 2025.
Zhang, Yu; Zhang, Yunyi; Shen, Yanzhen; Deng, Yu; Popa, Lucian; Shwartz, Larisa; Zhai, ChengXiang; Han, Jiawei. Proceedings of the AAAI Conference on Artificial Intelligence. Wooldridge, Michael J; Dy, Jennifer G; Natarajan, Sriraam (Eds.)
Accurately typing entity mentions in text segments is a fundamental task for many natural language processing applications. Many previous approaches rely on massive human-annotated data to perform entity typing. However, collecting such data in highly specialized science and engineering domains (e.g., software engineering and security) can be time-consuming and costly, not to mention the domain gap between training and inference data when the model must be applied to confidential datasets. In this paper, we study the task of seed-guided fine-grained entity typing in science and engineering domains, which takes the name and a few seed entities for each entity type as the only supervision and aims to classify new entity mentions into both seen and unseen types (i.e., those without seed entities). To solve this problem, we propose SEType, which first enriches the weak supervision by finding more entities for each seen type in an unlabeled corpus using the contextualized representations of pre-trained language models. It then matches the enriched entities against unlabeled text to obtain pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types. Extensive experiments on two datasets covering four domains demonstrate the effectiveness of SEType in comparison with various baselines. Code and data are available at: https://github.com/yuzhimanhua/SEType.
Free, publicly-accessible full text available March 25, 2025
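The pseudo-labeling step the abstract describes — matching enriched entity sets against unlabeled text to produce (sentence, mention, type) training samples — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the enriched entity sets and the toy corpus are hypothetical, and the real SEType system builds its sets from pre-trained language model representations rather than hand-written dictionaries.

```python
# Hypothetical enriched entity sets per seen type (in SEType these would be
# discovered from an unlabeled corpus, not written by hand).
ENRICHED_ENTITIES = {
    "programming_language": {"python", "rust", "haskell"},
    "vulnerability": {"buffer overflow", "sql injection"},
}

# A toy unlabeled corpus standing in for the domain-specific text.
corpus = [
    "The service was rewritten in Rust for safety.",
    "Attackers exploited a SQL injection in the login form.",
]

def pseudo_label(sentences, entity_sets):
    """Match enriched entities against unlabeled sentences to build
    pseudo-labeled (sentence, mention, type) samples."""
    samples = []
    for sent in sentences:
        lowered = sent.lower()
        for etype, entities in entity_sets.items():
            for ent in entities:
                if ent in lowered:  # naive substring match for illustration
                    samples.append((sent, ent, etype))
    return samples

for sent, mention, etype in pseudo_label(corpus, ENRICHED_ENTITIES):
    print(f"{mention!r} -> {etype}")
```

In the full method, samples like these would then train a textual entailment model (premise: the sentence; hypothesis: "the mention is a <type>"), which is what lets the system score unseen types that have no seed entities at all.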