In this research, we take an innovative approach to the Video Corpus Visual Answer Localization (VCVAL) task using the MedVidQA dataset and extend it by incorporating causal inference for medical videos, a novel approach in this field. By leveraging the state-of-the-art GPT-4 and Gemini Pro 1.5 models, our system is designed to localize temporal segments in videos and analyze cause-effect relationships from subtitles to enhance medical decision-making. This paper extends the work from the MedVidQA challenge by introducing causality extraction to improve the interpretability of localized video content. Subtitles are segmented to identify causal units such as cause, effect, condition, action, and signal. Prompts guide GPT-4 and Gemini Pro 1.5 in detecting and quantifying causal structures while analyzing explicit and implicit relationships, including those spanning multiple subtitle fragments. Our results reveal that both models perform better when handling queries individually but face challenges in batch processing for both temporal localization and causality extraction; preliminary results likewise indicate that while both models perform well on some videos, performance varies considerably across the corpus. The successful integration of temporal localization with causal inference can significantly improve the scalability and overall performance of medical video analysis. Our work demonstrates how AI systems can uncover valuable insights from medical videos, driving progress in medical AI applications and in the broader field of Health Informatics.
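The paper's prompts are not reproduced here, but a minimal sketch of the subtitle-to-causal-units step, assuming the OpenAI Python SDK and an invented subtitle (Gemini Pro 1.5 would be called analogously through its own API), could look like this:

```python
# Minimal sketch of prompt-based causal-unit extraction from video subtitles.
# Assumptions: OpenAI Python SDK v1 installed, OPENAI_API_KEY set, and the model
# returns bare JSON; the prompt wording and schema are illustrative, not the
# paper's actual prompts.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Segment the subtitle text below into causal units, labeling each unit as "
    "one of: cause, effect, condition, action, or signal. Note explicit and "
    "implicit relationships, including ones spanning multiple fragments. "
    'Return a JSON list of {"unit", "label", "linked_to"} objects.\n\nSubtitle: '
)

def extract_causal_units(subtitle: str) -> list:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT + subtitle}],
    )
    return json.loads(response.choices[0].message.content)

# One subtitle per query, the setting in which both models performed best.
units = extract_causal_units(
    "If the wound keeps bleeding, apply firm pressure for ten minutes."
)
```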
This content will become publicly available on January 1, 2027
Challenges and Opportunities in Causality Analysis Using Large Language Models
This article examines the challenges and opportunities in extracting causal information from text with Large Language Models (LLMs). It first establishes the importance of causality extraction and then explores different views on causality, including the common-sense ideas informing different data annotation schemes, Aristotle's Four Causes, and Pearl's Ladder of Causation, noting the relevance of this conceptual variety for the task. It then reviews datasets and work related to finding causal expressions, using both traditional machine learning methods and LLMs. Although the known limitations of LLMs, notably hallucinations and a lack of common sense, affect the reliability of causal findings, GPT and Gemini models (GPT-5, Gemini 2.5 Pro, and others) show the ability to conduct causality analysis; moreover, they can even apply different perspectives, such as counterfactual and Aristotelian ones. They are also capable of explaining and critiquing causal analyses: we report an experiment showing that, in addition to producing largely flawless analyses, the newer models exhibit very high agreement of 88–91% on causal relationships between events, much higher than the typically reported inter-annotator agreement of 30–70%. The article concludes with a discussion of the lessons learned about these challenges and of how LLMs might help address them in the future. For example, LLMs could help address the sparsity of annotated data. Moreover, LLMs point to a future where causality analysis in texts focuses not on annotations but on understanding, as causality is about semantics rather than word spans. The Appendices and shared data show examples of LLM outputs on tasks involving causal reasoning and causal information extraction, demonstrating the models' current abilities and limits.
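The 88–91% figure is agreement between models on per-event-pair causal judgments; a minimal sketch of how such agreement (with a chance-corrected companion statistic) can be computed, assuming two models labeled the same event pairs and using made-up placeholder labels, is:

```python
# Sketch: raw percent agreement and Cohen's kappa between two models'
# causal-relationship labels. The label lists are placeholders, not the
# experiment's data; each position is one event pair judged by both models.
from sklearn.metrics import cohen_kappa_score

gpt_labels    = ["causal", "causal", "not causal", "causal", "not causal"]
gemini_labels = ["causal", "causal", "not causal", "not causal", "not causal"]

matches = sum(a == b for a, b in zip(gpt_labels, gemini_labels))
agreement = matches / len(gpt_labels)                  # raw percent agreement
kappa = cohen_kappa_score(gpt_labels, gemini_labels)   # corrects for chance

print(f"agreement: {agreement:.0%}, kappa: {kappa:.2f}")
```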
- Award ID(s): 2141124
- PAR ID: 10657829
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Entropy
- Volume: 28
- Issue: 1
- ISSN: 1099-4300
- Page Range / eLocation ID: 23
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Large language models (LLMs) have achieved remarkable success in natural language processing (NLP), demonstrating significant capabilities in processing and understanding text data. However, recent studies have identified limitations in LLMs' ability to manipulate, program, and reason about structured data, especially graphs. We introduce GraphEval36K, the first comprehensive graph dataset, comprising 40 graph coding problems and 36,900 test cases to evaluate the ability of LLMs on graph problem solving. Our dataset is categorized into eight primary and four sub-categories to ensure a thorough evaluation across different types of graphs. We benchmark ten LLMs, finding that private models outperform open-source ones, though the gap is narrowing. We also analyze the performance of LLMs across directed vs. undirected graphs, different kinds of graph concepts, and network models. Furthermore, to improve the usability of our evaluation framework, we propose Structured Symbolic Decomposition (SSD), an instruction-based method designed to enhance LLM performance on complex graph tasks. Results show that SSD improves the average passing rate of GPT-4, GPT-4o, Gemini-Pro, and Claude-3-Sonnet by 8.38%, 6.78%, 29.28%, and 25.28%, respectively.
- The causal capabilities of large language models (LLMs) are a matter of significant debate, with critical implications for the use of LLMs in societally impactful domains such as medicine, science, law, and policy. We conduct a "behavioral" study of LLMs to benchmark their capability in generating causal arguments. Across a wide range of tasks, we find that LLMs can generate text corresponding to correct causal arguments with high probability, surpassing the best-performing existing methods. Algorithms based on GPT-3.5 and GPT-4 outperform existing algorithms on a pairwise causal discovery task (97%, a 13-point gain; a sketch of this task appears after this list), a counterfactual reasoning task (92%, a 20-point gain), and event causality (86% accuracy in determining necessary and sufficient causes in vignettes). We perform robustness checks across tasks and show that the capabilities cannot be explained by dataset memorization alone, especially since LLMs generalize to novel datasets created after the training cutoff date. That said, LLMs exhibit unpredictable failure modes, and we discuss the kinds of errors that may be improved and the fundamental limits of LLM-based answers. Overall, by operating on text metadata, LLMs bring capabilities so far understood to be restricted to humans, such as using collected knowledge to generate causal graphs or identifying background causal context from natural language. As a result, LLMs may be used by human domain experts to save effort in setting up a causal analysis, one of the biggest impediments to the widespread adoption of causal methods. Given that LLMs ignore the actual data, our results also point to a fruitful research direction of developing algorithms that combine LLMs with existing causal techniques. Code and datasets are available at https://github.com/py-why/pywhy-llm.
- Entity bias widely affects pretrained (large) language models, causing them to rely on (biased) parametric knowledge to make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of the underlying causal models in practice. The rise of black-box LLMs makes the situation even worse, because of their inaccessible parameters and uncalibrated logits. To address these problems, we propose a specific structured causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias in both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities, reducing the biasing information specific to the original entity while still preserving sufficient semantic information from similar entities. In the white-box setting, our training-time intervention improves the OOD performance of PLMs on relation extraction (RE) and machine reading comprehension (MRC) by 5.7 points and 9.1 points, respectively. In the black-box setting, our in-context intervention effectively reduces entity-based knowledge conflicts of GPT-3.5, achieving up to 20.5 points of improvement in exact-match accuracy on MRC and up to 17.6 points of reduction in memorization ratio on RE.
- Chemical reaction data has existed, and still largely exists, in unstructured forms, but curating such information into datasets suitable for tasks such as yield and reaction-outcome prediction is impractical via manual curation and not possible to automate through programmatic means alone. Large language models (LLMs) have emerged as potent tools, showcasing remarkable capabilities in processing textual information, and could therefore be extremely useful in automating this process. To address the challenge of unstructured data, we manually curated a dataset of structured chemical reaction data to fine-tune and evaluate LLMs. We propose a paradigm that leverages prompt tuning, fine-tuning techniques, and a verifier to check the extracted information (a sketch of this paradigm appears after this list). We evaluate the capabilities of various LLMs, including LLAMA-2 and GPT models with different parameter counts, on the data extraction task. Our results show that prompt tuning of GPT-4 yields the best accuracy and evaluation results, though fine-tuning LLAMA-2 models with hundreds of samples does enable them to extract and organize scientific material according to user-defined schemas better. This workflow shows an adaptable approach for chemical reaction data extraction but also highlights the challenges associated with nuance in chemical information. We open-sourced our code at GitHub.
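For the pairwise causal discovery task in the second entry above, a minimal sketch of the prompting setup, assuming the OpenAI Python SDK and an illustrative variable pair rather than the benchmark's actual data, might be:

```python
# Sketch: LLM-based pairwise causal discovery, as in the benchmark above.
# Assumptions: OpenAI Python SDK v1, OPENAI_API_KEY set; the variable pair
# and prompt wording are illustrative, not taken from the evaluated datasets.
from openai import OpenAI

client = OpenAI()

def causal_direction(var_a: str, var_b: str) -> str:
    prompt = (
        "Which cause-and-effect relationship is more likely?\n"
        f"(A) {var_a} causes {var_b}\n"
        f"(B) {var_b} causes {var_a}\n"
        "Answer with the single letter A or B."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(causal_direction("altitude", "air pressure"))  # expected: A
```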
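And for the extraction-plus-verifier paradigm in the last entry, a minimal sketch, assuming a hypothetical three-field reaction schema and a verifier that only checks field presence, might be:

```python
# Sketch: LLM extraction of reaction data followed by a simple verifier.
# The schema, prompt, verifier rule, and example sentence are illustrative
# assumptions, not the paper's actual setup; assumes the model returns bare JSON.
import json
from openai import OpenAI

client = OpenAI()
REQUIRED_FIELDS = {"reactants", "products", "yield"}  # hypothetical schema

def extract_reaction(text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Extract the reaction as JSON with keys "
                       "reactants, products, yield:\n" + text,
        }],
    )
    return json.loads(response.choices[0].message.content)

def verify(record: dict) -> bool:
    # Reject records with missing fields so they can be re-extracted or flagged.
    return REQUIRED_FIELDS <= record.keys()

record = extract_reaction("Benzaldehyde and acetone gave the enone in 85% yield.")
print(verify(record))
```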