The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code

Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao


Abstract
Causal reasoning, the ability to identify cause-and-effect relationships, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning such as abductive reasoning and counterfactual reasoning. Given that programming code tends to express causal relations more often and more explicitly, via conditional statements like "if", we explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that, compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects, and find that the key factor is the programming structure. Code and data are available at https://github.com/xxxiaol/magic-if.
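To make the abstract's idea concrete, here is a minimal, hypothetical sketch of what a code-style prompt with an explicit "if" conditional might look like. The function name and prompt layout are illustrative assumptions, not the paper's actual prompt format.

```python
def build_code_prompt(cause: str, effect: str, question: str) -> str:
    """Wrap a cause-effect pair in an if-statement so the causal
    relation is expressed as program structure rather than free text.
    A Code-LLM would be asked to complete or reason over this snippet."""
    return (
        "def reason():\n"
        f"    # Question: {question}\n"
        f"    if {cause!r}:\n"
        f"        return {effect!r}\n"
    )

# Example: a counterfactual-style query phrased as code.
prompt = build_code_prompt(
    cause="it rains heavily",
    effect="the street gets wet",
    question="What happens to the street if it rains heavily?",
)
print(prompt)
```

The point of the sketch is only that the conditional makes the cause-effect link syntactically explicit, which the paper identifies (via prompt interventions) as the key ingredient: the programming structure, not the surface wording.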
Anthology ID:
2023.findings-acl.574
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Note:
Pages:
9009–9022
URL:
https://aclanthology.org/2023.findings-acl.574
DOI:
10.18653/v1/2023.findings-acl.574
Cite (ACL):
Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, and Dongyan Zhao. 2023. The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9009–9022, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code (Liu et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.574.pdf