Abstract
The recent success of deep neural language models such as Bidirectional Encoder Representations from Transformers (BERT) has brought innovations to computational language research. The present study explores the possibility of using a language model to investigate human language processing, based on a case study of negative polarity items (NPIs). We first conducted an experiment with BERT to examine whether the model successfully captures the hierarchical structural relationship between an NPI and its licensor, and whether it is prone to errors analogous to the grammatical illusions observed in psycholinguistic experiments (Experiment 1). We also investigated whether the language model can capture the fine-grained semantic properties of NPI licensors and discriminate their subtle differences on a scale of licensing strength (Experiment 2). The results of the two experiments suggest that, overall, the neural language model is highly sensitive to both syntactic and semantic constraints in NPI processing. The model's processing patterns and sensitivities are shown to be very close to those of humans, supporting its use both as a research tool and as an object of study in the investigation of language.
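To make the experimental logic concrete, the sketch below shows one common way such a probe can be set up with a masked language model: compare the probability the model assigns to an NPI (e.g., "ever") in a context with a structurally appropriate licensor against a context with no licensor. This is only a minimal illustration, not the authors' code; the specific stimuli, the `bert-base-uncased` checkpoint, and the Hugging Face `transformers` API are assumptions for the example.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# probe a masked language model's sensitivity to NPI licensing by comparing
# the probability assigned to the NPI "ever" in a licensed vs. unlicensed frame.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def npi_probability(sentence_with_mask: str, npi: str = "ever") -> float:
    """Return the probability the model assigns to `npi` at the [MASK] slot."""
    inputs = tokenizer(sentence_with_mask, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos, :], dim=-1).squeeze(0)
    return probs[tokenizer.convert_tokens_to_ids(npi)].item()

# Hypothetical stimulus pair: a negative licensor in the correct structural
# position vs. no licensor; a licensing-sensitive model should prefer the first.
licensed = "No student has [MASK] read the book."
unlicensed = "The student has [MASK] read the book."
print(npi_probability(licensed), npi_probability(unlicensed))
```

Extending the same contrast to sentences where the licensor is present but structurally inaccessible (e.g., embedded inside a relative clause) would parallel the grammatical-illusion manipulation described in Experiment 1.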
Original language | English
---|---
Article number | 937656
Journal | Frontiers in Psychology
Volume | 14
DOIs |
State | Published - 2023
Bibliographical note
Funding Information: This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020S1A5A2A03042760).
Publisher Copyright: Copyright © 2023 Shin, Yi and Song.
Keywords
- BERT
- grammatical illusion
- licensing strength
- negative polarity items
- neural language model
- NPI licensing
- psycholinguistics
- scale of negativity