arXiv Analytics

arXiv:2304.05524 [cs.LG]

Understanding Causality with Large Language Models: Feasibility and Opportunities

Cheng Zhang, Stefan Bauer, Paul Bennett, Jianfeng Gao, Wenbo Gong, Agrin Hilmkil, Joel Jennings, Chao Ma, Tom Minka, Nick Pawlowski, James Vaughan

Published 2023-04-11 (Version 1)

We assess the ability of large language models (LLMs) to answer causal questions by analyzing their strengths and weaknesses across three types of causal questions. We believe that current LLMs can answer causal questions that draw on existing causal knowledge, acting as combined domain experts. However, they are not yet able to provide satisfactory answers for discovering new knowledge or for high-stakes decision-making tasks that demand high precision. We discuss possible future directions and opportunities, such as enabling explicit and implicit causal modules as well as deep causally aware LLMs. These will not only enable LLMs to answer many different types of causal questions for greater impact but will also make LLMs more trustworthy and efficient in general.

Related articles:
arXiv:2306.08107 [cs.LG] (Published 2023-06-13)
AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks
arXiv:2306.04634 [cs.LG] (Published 2023-06-07)
On the Reliability of Watermarks for Large Language Models
arXiv:2308.07505 [cs.LG] (Published 2023-08-15)
Data Race Detection Using Large Language Models