With the rapid growth of online information services, an enormous volume of news data has become available. To help people quickly digest this flood of information, we define a new problem – schema-based news event profiling – profiling events reported in open-domain news corpora with a set of slots and slot-value pairs for each event, where the set of slots forms the schema of an event type. Such profiling not only provides readers with concise views of events, but also facilitates various applications such as information retrieval, knowledge graph construction and question answering. It is, however, a quite challenging task. The first challenge is to discover events and event types, since both are initially unknown. The second is the lack of pre-defined event-type schemas. Lastly, even with the schemas induced, generating event profiles from them is still essential yet demanding. To address these challenges, we propose a fully automatic, unsupervised, three-step framework to obtain event profiles. First, we develop a Bayesian non-parametric model to detect events and event types by exploiting the slot expressions of the entities mentioned in news articles. Second, we propose an unsupervised embedding model for schema induction that encodes the following insight: an entity may serve as the value of multiple slots in an event, but if it appears in more sentences along with the same set of other entities in that event, its slots in these sentences tend to be similar. Finally, we build event profiles by extracting slot values for each event based on the slots’ expression patterns. To the best of our knowledge, this is the first work on schema-based profiling for news events. Experimental results on a large news corpus demonstrate the superior performance of our method against state-of-the-art baselines on event detection, schema induction and event profiling.
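Below is a minimal, illustrative sketch (in Python, with hypothetical function and variable names not taken from the paper) of how the schema-induction insight stated above could be operationalized: for a given entity, sentence pairs that share a large fraction of the entity's co-occurring entities are treated as evidence that the entity fills the same slot in both sentences, and could be fed to an embedding model as positive pairs.

```python
# Illustrative sketch only: names and thresholds are assumptions, not the paper's.
# It turns the schema-induction insight -- an entity's slots tend to agree across
# sentences that share the same co-occurring entities -- into candidate training
# pairs for some downstream slot-embedding model.
from itertools import combinations

def context_pairs(event_sentences, target_entity, min_overlap=0.3):
    """Return (i, j, overlap) for sentence pairs in which `target_entity` likely
    fills the same slot, judged by Jaccard overlap of the other entities mentioned."""
    # Keep only sentences that mention the target entity, with the entity removed
    # from its own context set.
    indexed = [(i, ents - {target_entity})
               for i, ents in enumerate(event_sentences)
               if target_entity in ents]
    pairs = []
    for (i, ctx_i), (j, ctx_j) in combinations(indexed, 2):
        union = ctx_i | ctx_j
        overlap = len(ctx_i & ctx_j) / len(union) if union else 0.0
        if overlap >= min_overlap:
            pairs.append((i, j, overlap))
    return pairs

# Toy event: each sentence is represented by the set of entities it mentions.
sentences = [
    {"police", "protester", "airport"},
    {"police", "protester", "road"},
    {"government", "statement"},
]
print(context_pairs(sentences, "protester"))  # -> [(0, 1, 0.33...)]
```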
Precise temporal slot filling via truth finding with data-driven commonsense
The task of temporal slot filling (TSF) is to extract values of specific attributes for a given entity, called “facts”, as well as the temporal tags of those facts, from text data. While existing work treats a temporal tag as a single time slot, in this paper we introduce and study the task of Precise TSF (PTSF), which fills two precise temporal slots: the beginning and ending time points of a fact. Our observations on a news corpus show that most facts should have both points, yet fewer than 0.1% of them come with explicit time expressions in the documents. On the other hand, a document's post time, though often available, is far less precise than a time expression at indicating when a fact was valid. Therefore, directly decomposing the time expressions or using an arbitrary post-time period cannot provide accurate results for PTSF. The challenge of PTSF lies in finding precise time tags in the noisy and incomplete temporal contexts of the text. To address this challenge, we propose an unsupervised approach based on the philosophy of truth finding. The approach has two modules that mutually enhance each other: one is a reliability estimator of fact extractors conditioned on the temporal contexts; the other is a fact trustworthiness estimator based on the extractors' reliability. Commonsense knowledge (e.g., one country has only one president at any specific time) is automatically generated from data and used to infer false claims from trustworthy facts. For evaluation, we manually collect hundreds of temporal facts from Wikipedia as ground truth, including countries' presidential terms and sports teams' player career histories. Experiments on a large news dataset demonstrate the accuracy and efficiency of our proposed algorithm.
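The following is a minimal sketch, not the paper's algorithm, of the truth-finding loop described above: extractor reliability and claim trustworthiness are estimated in alternation, and a commonsense exclusivity rule (e.g., one president per country at a time) penalizes conflicting claims. The claims, extractor names, damping factor, and iteration count are all illustrative assumptions.

```python
# Minimal truth-finding sketch: reliabilities and trust scores reinforce each
# other over iterations; conflicting claims within an exclusive group are damped.
from collections import defaultdict

# claim (entity, attribute, value, year) -> extractors (sources) that produced it
claims = {
    ("France", "president", "A", "2015"): {"pattern_x", "openie_y"},
    ("France", "president", "B", "2015"): {"pattern_z"},   # conflicts with the claim above
    ("Spain",  "president", "C", "2015"): {"pattern_x"},
}

reliability = defaultdict(lambda: 0.5)   # start every extractor at 0.5
trust = {c: 0.5 for c in claims}

for _ in range(20):                      # iterate until (near) convergence
    # 1) Trustworthiness of a claim = average reliability of its extractors.
    for c, sources in claims.items():
        trust[c] = sum(reliability[s] for s in sources) / len(sources)
    # 2) Commonsense exclusivity: within each (entity, attribute, year) group,
    #    only the highest-scored claim is kept intact; the rest are penalized.
    groups = defaultdict(list)
    for c in claims:
        groups[(c[0], c[1], c[3])].append(c)
    for group in groups.values():
        best = max(group, key=trust.get)
        for c in group:
            if c != best:
                trust[c] *= 0.5
    # 3) Reliability of an extractor = average trust of the claims it supports.
    for s in {src for srcs in claims.values() for src in srcs}:
        supported = [trust[c] for c, srcs in claims.items() if s in srcs]
        reliability[s] = sum(supported) / len(supported)

print(sorted(trust.items(), key=lambda kv: -kv[1]))
```

In the actual approach, the reliability estimate is further conditioned on the temporal context of each extraction rather than being a single number per extractor, which is what lets the two modules mutually enhance each other.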
- Award ID(s): 1849816
- PAR ID: 10188920
- Date Published:
- Journal Name: Knowledge and Information Systems
- ISSN: 0219-1377
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Deceptive claims surround us, embedded in fake news, advertisements, political propaganda, and rumors. How do people know what to believe? Truth judgments reflect inferences drawn from three types of information: base rates, feelings, and consistency with information retrieved from memory. First, people exhibit a bias to accept incoming information, because most claims in our environments are true. Second, people interpret feelings, like ease of processing, as evidence of truth. And third, people can (but do not always) consider whether assertions match facts and source information stored in memory. This three-part framework predicts specific illusions (e.g., truthiness, illusory truth), offers ways to correct stubborn misconceptions, and suggests the importance of converging cues in a post-truth world, where falsehoods travel further and faster than the truth.
- (Proc. 2023 The Web Conf.) Massive and fast-evolving news articles keep emerging on the web. To effectively summarize and provide concise insights into real-world events, we propose a new event knowledge extraction task, Event Chain Mining, in this paper. Given multiple documents about a super event, it aims to mine a series of salient events in temporal order. For example, the event chain of the super event Mexico Earthquake in 2017 is {earthquake hit Mexico, destroy houses, kill people, block roads}. This task can help readers capture the gist of texts quickly, thereby improving reading efficiency and deepening text comprehension. To address this task, we regard an event as a cluster of different mentions of similar meanings. In this way, we can identify the different expressions of events, enrich their semantic knowledge and replenish relation information among them. Taking events as the basic unit, we present a novel unsupervised framework, EMiner. Specifically, we extract event mentions from texts and merge those with similar meanings into a cluster as a single event. By jointly incorporating both content and commonsense, essential events are then selected and arranged chronologically to form an event chain. Meanwhile, we annotate a multi-document benchmark to build a comprehensive testbed for the proposed task. Extensive experiments are conducted to verify the effectiveness of EMiner in terms of both automatic and human evaluations. (A minimal clustering-and-ordering sketch in the spirit of this pipeline appears after this list.)
- Abstract: A new hierarchy of lasting gravitational-wave effects (the higher memory effects) was recently identified in asymptotically flat spacetimes, with the better-known displacement, spin, and center-of-mass memory effects included as the lowest two orders in the set of these effects. These gravitational-wave observables are determined by a set of temporal moments of the news tensor, which describes gravitational radiation from an isolated source. The moments of the news can be expressed in terms of changes in charge-like expressions and integrals over retarded time of flux-like terms, some of which vanish in the absence of radiation. In this paper, we compute expressions for the flux-like contributions to the moments of the news in terms of a set of multipoles that characterize the gravitational-wave strain. We also identify a part of the strain that gives rise to these moments of the news. In the context of post-Newtonian theory, we show that the strain related to the moments of the news is responsible for the many nonlinear, instantaneous terms and ‘memory’ terms that appear in the post-Newtonian expressions for the radiative multipole moments of the strain. We also apply our results to compute the leading post-Newtonian expressions for the moments of the news and the corresponding strains that are generated during the inspiral of compact binary sources. These results provide a new viewpoint on the waveforms computed from the multipolar post-Minkowski formalism, and they could be used to assess the detection prospects of this new class of higher memory effects. (A schematic form of the moments of the news is sketched after this list.)
- Automated event detection from news corpora is a crucial task towards mining fast-evolving structured knowledge. As real-world events have different granularities, from the top-level themes to key events and then to event mentions corresponding to concrete actions, there are generally two lines of research: (1) theme detection tries to identify from a news corpus major themes (e.g., “2019 Hong Kong Protests” versus “2020 U.S. Presidential Election”) which have very distinct semantics; and (2) action extraction aims to extract from a single document mention-level actions (e.g., “the police hit the left arm of the protester”) that are often too fine-grained for comprehending the real-world event. In this paper, we propose a new task, key event detection at the intermediate level, which aims to detect from a news corpus key events (e.g., HK Airport Protest on Aug. 12-14), each happening at a particular time/location and focusing on the same topic. This task can bridge event understanding and structuring and is inherently challenging because of (1) the thematic and temporal closeness of different key events and (2) the scarcity of labeled data due to the fast-evolving nature of news articles. To address these challenges, we develop an unsupervised key event detection framework, EvMine, that (1) extracts temporally frequent peak phrases using a novel ttf-itf score, (2) merges peak phrases into event-indicative feature sets by detecting communities from our designed peak phrase graph that captures document co-occurrences, semantic similarities, and temporal closeness signals, and (3) iteratively retrieves documents related to each key event by training a classifier with automatically generated pseudo labels from the event-indicative feature sets and refining the detected key events using the retrieved documents in each iteration. Extensive experiments and case studies show EvMine outperforms all the baseline methods and its ablations on two real-world news corpora. (A toy illustration of a temporal tf-idf-style phrase score appears after this list.)
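As referenced in the Event Chain Mining item above, the following is a minimal sketch of a clustering-then-ordering pipeline in that spirit. It is not the EMiner implementation: the greedy cosine clustering, the threshold, and the toy mentions and embeddings are assumptions made for illustration.

```python
# Illustrative sketch: cluster event mentions with similar embeddings, then order
# the clusters by their earliest timestamp to form an event chain.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def chain(mentions, threshold=0.8):
    """mentions: list of (text, embedding, timestamp). Returns an event chain
    as one representative mention per cluster, in temporal order."""
    clusters = []                                    # each cluster: list of indices
    for i, (_, vec, _) in enumerate(mentions):
        for cluster in clusters:
            rep_vec = mentions[cluster[0]][1]
            if cosine(vec, rep_vec) >= threshold:    # greedy single-pass merge
                cluster.append(i)
                break
        else:
            clusters.append([i])
    # Represent each cluster by its earliest mention, then sort chronologically.
    events = [min((mentions[i] for i in c), key=lambda m: m[2]) for c in clusters]
    return [text for text, _, _ in sorted(events, key=lambda m: m[2])]

toy = [
    ("earthquake hits Mexico", [1.0, 0.0], 0),
    ("quake strikes Mexico",   [0.95, 0.1], 1),      # same event, different wording
    ("houses destroyed",       [0.0, 1.0], 2),
    ("roads blocked",          [0.1, 0.9], 3),
]
print(chain(toy))   # -> ['earthquake hits Mexico', 'houses destroyed']
```

EMiner additionally uses content and commonsense signals to select essential events before ordering them; the sketch only covers the clustering and chronological arrangement.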
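For the higher-memory item above, a schematic rendering of the quantities it mentions may help fix ideas; conventions, weights, and normalizations vary across the literature, so this should not be read as the paper's exact definitions. The news is the retarded-time derivative of the strain, its k-th temporal moment is a weighted integral over retarded time, and the zeroth moment reduces to the net change in strain associated with displacement memory:

```latex
N_{AB}(u) = \partial_u C_{AB}(u), \qquad
\mathcal{N}^{(k)}_{AB} = \int_{u_0}^{u_f} (u - u_0)^{k}\, N_{AB}(u)\, \mathrm{d}u, \qquad
\mathcal{N}^{(0)}_{AB} = C_{AB}(u_f) - C_{AB}(u_0).
```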
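Finally, for the key event detection item (EvMine), the snippet below illustrates the flavor of temporally aware phrase scoring the abstract describes. The formula is a hypothetical tf-idf analogue over days, not EvMine's actual ttf-itf definition, and the corpus is made up.

```python
# Hypothetical "temporal tf-idf" analogue, for illustration only: a phrase scores
# high on a day when it is frequent that day but appears on few days overall.
from collections import Counter
from math import log

def ttf_itf(docs_by_day):
    """docs_by_day: {day: [phrases extracted that day]}. Returns {(phrase, day): score}."""
    n_days = len(docs_by_day)
    day_freq = Counter()                          # number of days a phrase occurs on
    per_day_counts = {}
    for day, phrases in docs_by_day.items():
        counts = Counter(phrases)
        per_day_counts[day] = counts
        day_freq.update(counts.keys())
    scores = {}
    for day, counts in per_day_counts.items():
        total = sum(counts.values())
        for phrase, c in counts.items():
            tf = c / total                        # term frequency within the day
            itf = log(n_days / day_freq[phrase])  # rarer across days -> larger
            scores[(phrase, day)] = tf * itf
    return scores

corpus = {
    "08-12": ["airport protest", "airport protest", "police", "hong kong"],
    "08-13": ["airport protest", "flights cancelled", "hong kong"],
    "08-14": ["march", "hong kong"],
}
scores = ttf_itf(corpus)
peak = max(scores, key=scores.get)
print(peak, round(scores[peak], 3))   # phrases peaking on specific days stand out
```

In EvMine, peak phrases scored in this spirit are further merged via community detection on a peak phrase graph and used to generate pseudo labels for iterative document retrieval; those steps are omitted here.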