Paper by LT PhD Students Changjiang Gao and Zhengwu Ma and Prof. Jixing Li Accepted by Nature Computational Science


Congratulations! Let's shine our spotlight on our LT PhD students Changjiang Gao and Zhengwu Ma, and Prof. Jixing Li, whose newly accepted article, "Scaling, but not instruction tuning, increases large language models’ alignment with language processing in the human brain," will appear in Nature Computational Science.

Abstract
Transformer-based large language models (LLMs) have significantly advanced our understanding of meaning representation in the human brain. However, increasingly large LLMs have been questioned as valid cognitive models due to their extensive training data and their ability to access context thousands of words long. In this study, we investigated whether instruction tuning, another core technique in recent LLMs beyond mere scaling, can enhance models’ ability to capture linguistic information in the human brain. We compared base and instruction-tuned LLMs of varying sizes against human behavioral and brain activity measured with eye-tracking and functional magnetic resonance imaging (fMRI) during naturalistic reading. We show that simply making LLMs larger leads to a closer match with the human brain than fine-tuning them with instructions. These findings have substantial implications for understanding the cognitive plausibility of LLMs and their role in studying naturalistic language comprehension.

 


Click here for the preprint.