TY - JOUR
T1 - An Experimental Investigation of Discourse Expectations in Neural Language Models
AU - Yi, Eunkyung
AU - Cho, Hyowon
AU - Song, Sanghoun
N1 - Funding Information:
* This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020S1A5A2A03042760). We thank the anonymous reviewers for their valuable comments. We would also like to thank Unsub Shin for his feedback on an earlier draft. Any remaining errors are solely our responsibility.
Publisher Copyright:
© 2022 KASELL All rights reserved.
PY - 2022
Y1 - 2022
N2 - The present study reports on three language processing experiments with state-of-the-art neural language models from a psycholinguistic perspective. We investigated whether and how discourse expectations demonstrated in the psycholinguistics literature are manifested in neural language models, using language models whose architectures and assumptions are considered most appropriate for the given language processing tasks. We first performed a general assessment of a neural model’s discourse expectations about story continuity or coherence (Experiment 1), based on the next sentence prediction module of the bidirectional transformer-based model BERT (Devlin et al. 2019). We then studied language models’ expectations about reference continuity in discourse contexts in both comprehension (Experiment 2) and production (Experiment 3) settings, based on so-called Implicit Causality biases, using the unidirectional (left-to-right) RNN-based model LSTM (Hochreiter and Schmidhuber 1997) and the transformer-based generation model GPT-2 (Radford et al. 2019), respectively. The results of the three experiments showed, first, that neural language models are highly successful in distinguishing between reasonably expected and unexpected story continuations in human communication and, second, that they exhibit human-like bias patterns in reference expectations in both comprehension and production contexts. These results suggest that language models can closely simulate the discourse processing features observed in psycholinguistic experiments with human speakers and that, beyond functioning as a technology for practical purposes, they can serve as a useful research tool and object of study for human discourse processing.
AB - The present study reports on three language processing experiments with state-of-the-art neural language models from a psycholinguistic perspective. We investigated whether and how discourse expectations demonstrated in the psycholinguistics literature are manifested in neural language models, using language models whose architectures and assumptions are considered most appropriate for the given language processing tasks. We first performed a general assessment of a neural model’s discourse expectations about story continuity or coherence (Experiment 1), based on the next sentence prediction module of the bidirectional transformer-based model BERT (Devlin et al. 2019). We then studied language models’ expectations about reference continuity in discourse contexts in both comprehension (Experiment 2) and production (Experiment 3) settings, based on so-called Implicit Causality biases, using the unidirectional (left-to-right) RNN-based model LSTM (Hochreiter and Schmidhuber 1997) and the transformer-based generation model GPT-2 (Radford et al. 2019), respectively. The results of the three experiments showed, first, that neural language models are highly successful in distinguishing between reasonably expected and unexpected story continuations in human communication and, second, that they exhibit human-like bias patterns in reference expectations in both comprehension and production contexts. These results suggest that language models can closely simulate the discourse processing features observed in psycholinguistic experiments with human speakers and that, beyond functioning as a technology for practical purposes, they can serve as a useful research tool and object of study for human discourse processing.
KW - BERT
KW - GPT-2
KW - LSTM
KW - coreference resolution
KW - discourse expectation
KW - implicit causality bias
KW - neural language model
KW - next sentence prediction
KW - surprisal
UR - http://www.scopus.com/inward/record.url?scp=85141426090&partnerID=8YFLogxK
U2 - 10.15738/kjell.22..202210.1101
DO - 10.15738/kjell.22..202210.1101
M3 - Article
AN - SCOPUS:85141426090
SN - 1598-1398
VL - 22
SP - 1101
EP - 1115
JO - Korean Journal of English Language and Linguistics
JF - Korean Journal of English Language and Linguistics
ER -