Abstract
Recent natural language processing (NLP) techniques have achieved high performance on benchmark datasets, primarily due to significant improvements in deep learning. These advances in the research community have led to great enhancements in state-of-the-art production systems for NLP tasks, such as virtual assistants, speech recognition, and sentiment analysis. However, such NLP systems still often fail when tested with adversarial attacks. This lack of robustness exposes troubling gaps in current models' language understanding capabilities and creates problems when NLP systems are deployed in real life. In this paper, we present a structured overview of NLP robustness research by summarizing the literature in a systematic way across various dimensions. We then take a deep dive into the various dimensions of robustness, across techniques, metrics, embeddings, and benchmarks.
| Original language | English |
|---|---|
| Pages (from-to) | 86038-86056 |
| Number of pages | 19 |
| Journal | IEEE Access |
| Volume | 10 |
| DOIs | |
| State | Published - 2022 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- Adversarial attacks
- Natural language processing
- Robustness
Fingerprint
Dive into the research topics of 'Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions'. Together they form a unique fingerprint.